
Latest publications in Proceedings. IEEE International Symposium on Biomedical Imaging

STROKE LESION SEGMENTATION USING MULTI-STAGE CROSS-SCALE ATTENTION.
Pub Date: 2025-04-01, Epub Date: 2025-05-12, DOI: 10.1109/isbi60581.2025.10980930
Liang Shang, William A Sethares, Anusha Adluru, Andrew L Alexander, Vivek Prabhakaran, Veena A Nair, Nagesh Adluru

Precise characterization of stroke lesions from MRI data has immense value in prognosticating clinical and cognitive outcomes following a stroke. Manual stroke lesion segmentation is time-consuming and requires the expertise of neurologists and neuroradiologists. Often, lesions are grossly characterized for their location and overall extent using bounding boxes without specific delineation of their boundaries. While such characterization provides some clinical value, to develop a precise mechanistic understanding of the impact of lesions on post-stroke vascular contributions to cognitive impairments and dementia (VCID), the stroke lesions need to be fully segmented with accurate boundaries. This work introduces the Multi-Stage Cross-Scale Attention (MSCSA) mechanism, applied to the U-Net family, to improve the mapping between brain structural features and lesions of varying sizes. Using the Anatomical Tracings of Lesions After Stroke (ATLAS) v2.0 dataset, MSCSA outperforms all baseline methods in both Dice and F1 scores on a subset focusing on small lesions, while maintaining competitive performance across the entire dataset. Notably, the ensemble strategy incorporating MSCSA achieves the highest scores for Dice and F1 on both the full dataset and the small lesion subset. These results demonstrate the effectiveness of MSCSA in segmenting small lesions and highlight its robustness across different training schemes for large stroke lesions. Our code is available at: https://github.com/nadluru/StrokeLesSeg.
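The Dice and F1 metrics used throughout this evaluation can be illustrated with a minimal, self-contained sketch (not the authors' implementation; the function names are hypothetical):

```python
def dice_score(pred, target):
    """Dice coefficient between two binary masks, given as flat 0/1 lists:
    2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0

def f1_score(tp, fp, fn):
    """F1 from lesion-wise true positive, false positive, and false negative counts."""
    denom = 2 * tp + fp + fn
    return 2.0 * tp / denom if denom else 1.0
```

Voxel-wise, Dice and F1 coincide; reporting both typically means Dice is computed over voxels while F1 counts whole detected lesions, which is the regime where small-lesion performance differences show up.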

Citations: 0
MASKED MOMENTUM CONTRASTIVE DYNAMIC TRANSFORMER FOR SELF-SUPERVISED FUNCTIONAL CONNECTIVITY REPRESENTATION LEARNING.
Pub Date: 2025-04-01, Epub Date: 2025-05-12, DOI: 10.1109/isbi60581.2025.10980976
Jiale Cheng, Dan Hu, Zhengwang Wu, Xinrui Yuan, Gang Li

Functional connectivity (FC) derived from functional MRI (fMRI) shows significant promise in predicting behavior and demographics using deep learning techniques. Incorporating vertex-wise FC maps, which capture fine-grained spatial details of neural activity, offers the potential to enhance FC-based prediction accuracy. However, fMRI data is inherently limited and noisy, challenging neural networks to reliably identify patterns within high-dimensional cortical vertices. Therefore, we design a novel Masked Momentum Contrastive Dynamic Transformer, which utilizes masked momentum contrastive pre-training to explore subject-specific features and enhances prediction accuracy by leveraging the temporal dynamics of FCs with a dynamic transformer. Specifically, our framework 1) learns effective subject-specific representations by treating vertex-wise FCs from different runs of an individual as distinct views and maximizing their affinity, and 2) employs a vertex-wise masking strategy to promote learning from limited data. Extensive experiments on gender classification and cognition prediction validate its superior performance on the Human Connectome Project dataset.
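Two ingredients of this pre-training scheme, the momentum (EMA) encoder update and vertex-wise masking, can be sketched in a few lines (illustrative only; both helpers are hypothetical, not the authors' code):

```python
import random

def momentum_update(key_params, query_params, m=0.99):
    """MoCo-style momentum update of the key encoder: key <- m*key + (1-m)*query."""
    return [m * k + (1 - m) * q for k, q in zip(key_params, query_params)]

def mask_vertices(fc_row, mask_ratio=0.5, seed=0):
    """Zero out a random fraction of vertices in one vertex-wise FC row."""
    rng = random.Random(seed)
    masked = set(rng.sample(range(len(fc_row)), int(len(fc_row) * mask_ratio)))
    return [0.0 if i in masked else v for i, v in enumerate(fc_row)]
```

The slowly moving key encoder provides stable targets, while masking forces the network to reconstruct context from partial vertex-wise FC, which is the stated remedy for limited, noisy fMRI data.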

Citations: 0
DUAL MULTI-ATLAS REPRESENTATION ALIGNMENT FOR BRAIN DISORDER DIAGNOSIS USING MORPHOLOGICAL CONNECTOME.
Pub Date: 2025-04-01, Epub Date: 2025-05-12, DOI: 10.1109/isbi60581.2025.10981287
Kangfu Han, Dan Hu, Jiale Cheng, Tianming Liu, Andrea Bozoki, Dajiang Zhu, Gang Li

In structural magnetic resonance imaging (MRI), morphological connectome plays an important role in capturing coordinated patterns of region-wise morphological features for brain disorder diagnosis. However, significant challenges remain in aggregating diverse representations from multiple brain atlases, stemming from variations in the definition of regions of interest. To effectively integrate complementary information from multiple atlases while mitigating possible biases, we propose a novel dual multi-atlas representation alignment approach (DMAA) for brain disorder diagnosis. Specifically, we first minimize the maximum mean discrepancy of multi-atlas representations to align them into a unified distribution, reducing inter-atlas variability and enhancing effective feature fusion. Then, to further manage the anatomical variability, we apply optimal transport to capture and harmonize region-wise differences, preserving plausible relationships across atlases. Extensive experiments on ADNI, PPMI, ADHD200, and SchizConnect datasets demonstrate the effectiveness of our proposed DMAA on brain disorder diagnosis using multi-atlas morphological connectome.
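The first alignment step minimizes the maximum mean discrepancy (MMD) between atlas-specific representations. A plain-Python sketch of the biased squared-MMD estimate with an RBF kernel, shown for scalar features (illustrative, not the paper's code):

```python
import math

def rbf(x, y, gamma=1.0):
    """RBF (Gaussian) kernel on scalars."""
    return math.exp(-gamma * (x - y) ** 2)

def mmd2(xs, ys, gamma=1.0):
    """Biased estimate of squared MMD between two samples under an RBF kernel."""
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2.0 * kxy
```

Driving this quantity toward zero pulls the two representation distributions together, which is the "unified distribution" objective described above; the optimal-transport step then handles the finer region-wise correspondence.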

Citations: 0
EVIT-UNET: U-NET LIKE EFFICIENT VISION TRANSFORMER FOR MEDICAL IMAGE SEGMENTATION ON MOBILE AND EDGE DEVICES.
Pub Date: 2025-04-01, Epub Date: 2025-05-12, DOI: 10.1109/isbi60581.2025.10981108
Xin Li, Wenhui Zhu, Xuanzhao Dong, Oana M Dumitrascu, Yalin Wang

With the rapid development of deep learning, CNN-based U-shaped networks have succeeded in medical image segmentation and are widely applied for various tasks. However, their limitations in capturing global features hinder their performance in complex segmentation tasks. The rise of Vision Transformer (ViT) has effectively compensated for this deficiency of CNNs and promoted the application of ViT-based U-networks in medical image segmentation. However, the high computational demands of ViT make it unsuitable for many medical devices and mobile platforms with limited resources, restricting its deployment on resource-constrained and edge devices. To address this, we propose EViT-UNet, an efficient ViT-based segmentation network that reduces computational complexity while maintaining accuracy, making it ideal for resource-constrained medical devices. EViT-UNet is built on a U-shaped architecture, comprising an encoder, decoder, bottleneck layer, and skip connections, combining convolutional operations with self-attention mechanisms to optimize efficiency. Experimental results demonstrate that EViT-UNet achieves high accuracy in medical image segmentation while significantly reducing computational complexity. The code is available at https://github.com/Retinal-Research/EVIT-UNET.
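The self-attention that gives ViT encoders their global receptive field, and their cost quadratic in token count, reduces to scaled dot-product attention; a small pure-Python sketch of the baseline operation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention, softmax(Q K^T / sqrt(d)) V,
    with Q, K, V given as lists of row vectors."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out
```

Efficient ViT designs such as EViT-UNet reduce this quadratic cost by interleaving attention with convolutions; the sketch shows only the unmodified operation that makes plain ViTs expensive on edge devices.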

Citations: 0
SCAR-AWARE LATE MECHANICAL ACTIVATION DETECTION NETWORK FOR OPTIMAL CARDIAC RESYNCHRONIZATION THERAPY PLANNING.
Pub Date: 2025-04-01, Epub Date: 2025-05-12, DOI: 10.1109/isbi60581.2025.10980862
Jiarui Xing, Shuo Wang, Amit R Patel, Kenneth C Bilchick, Frederick H Epstein, Miaomiao Zhang

Accurate identification of late mechanical activation (LMA) regions is crucial for optimal cardiac resynchronization therapy (CRT) lead implantation. However, existing approaches using cardiac magnetic resonance (CMR) imaging often overlook myocardial scar information, which may be mistakenly identified as delayed activation regions. To address this issue, we propose a scar-aware LMA detection network that simultaneously detects myocardial scar and prevents LMA localization in these scarred regions. More specifically, our model integrates a pre-trained scar segmentation network using late gadolinium enhancement (LGE) CMRs into a LMA detection network based on highly accurate strain derived from displacement encoding with stimulated echoes (DENSE) CMRs. We introduce a novel scar-aware loss function that utilizes the segmented scar information to discourage false-positive detections of late activated areas. Our model can be trained with or without paired LGE data. During inference, our model does not require the input of LGE images, leveraging learned patterns from strain data alone to mitigate false-positive LMA detection in potential scar regions. We evaluate our model on subjects with and without myocardial scar, demonstrating significantly improved LMA detection accuracy in both scenarios. Our work paves the way for improved CRT planning, potentially leading to better patient outcomes.
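A scar-aware loss of this kind can be pictured as the base detection loss plus a penalty on LMA probability mass falling inside the segmented scar. A minimal sketch under that assumption (hypothetical names, not the paper's exact formulation):

```python
def scar_aware_loss(base_loss, lma_pred, scar_mask, lam=1.0):
    """Add a penalty for predicted LMA probability inside scar voxels.
    lma_pred: per-voxel LMA probabilities; scar_mask: 0/1 scar segmentation."""
    overlap = sum(p * s for p, s in zip(lma_pred, scar_mask))
    penalty = overlap / max(sum(scar_mask), 1)
    return base_loss + lam * penalty
```

Because the scar mask enters only through the loss, the gradient discourages LMA activations in scarred tissue during training, so no LGE input is needed at inference, matching the behavior described above.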

Citations: 0
LOGSAGE: LOG-BASED SALIENCY FOR GUIDED ENCODING IN ROBUST NUCLEI SEGMENTATION OF IMMUNOFLUORESCENCE HISTOLOGY IMAGES.
Pub Date: 2025-04-01, Epub Date: 2025-05-12, DOI: 10.1109/ISBI60581.2025.10980672
Sahar A Mohammed, Siyavash Shabani, Muhammad Sohaib, Corina Nicolescu, Mary Helen Barcellos-Hoff, Bahram Parvin

The tumor microenvironment (TME) is critical in cancer progression, development, and treatment response. However, its complex cellular architecture (e.g., cell type, organization) presents significant challenges for accurate immunofluorescence (IF) image segmentation. We introduce LoGSAGE-Net (LoG-based SAliency for Guided Encoding), which couples a Swin Transformer with the encoded response from Laplacian of Gaussian (LoG) on multiple scales. The loss function incorporates two deformation metrics, combining the Dice- and curvature alignment loss. The model is applied to a large cohort of preclinical data and has shown an improved performance over the state-of-the-art methods. The proposed model achieved a Dice score of 94.92% and a Panoptic Quality (PQ) score of 81%. This model supports robust profiling of the TME for sensitive assays.
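The LoG responses feeding the saliency guidance come from convolving the image with a Laplacian-of-Gaussian kernel at several scales (sigma values). A sketch of the discrete kernel, shifted to sum to zero (illustrative; the paper's multi-scale encoding is not reproduced here):

```python
import math

def log_kernel(size=7, sigma=1.0):
    """Discrete 2-D Laplacian-of-Gaussian kernel, shifted so its entries sum to zero."""
    c = size // 2
    k = []
    for y in range(size):
        row = []
        for x in range(size):
            r2 = (x - c) ** 2 + (y - c) ** 2
            row.append((r2 - 2.0 * sigma**2) / sigma**4 * math.exp(-r2 / (2.0 * sigma**2)))
        k.append(row)
    mean = sum(v for row in k for v in row) / size**2
    return [[v - mean for v in row] for row in k]
```

Blob-like structures near scale sigma, such as nuclei in this setting, produce strong extremal responses, which is why LoG is a natural saliency prior for nuclei segmentation.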

Citations: 0
AUTOENCODER FOR 4-DIMENSIONAL FIBER ORIENTATION DISTRIBUTIONS FROM DIFFUSION MRI.
Pub Date: 2025-04-01, Epub Date: 2025-05-12, DOI: 10.1109/isbi60581.2025.10981302
Shuo Huang, Lujia Zhong, Yonggang Shi

Fiber orientation distributions (FODs) are widely used in connectome analysis based on diffusion MRI. Spherical harmonics (SPHARMs) are often used for the efficient representation of FODs; however, SPHARMs over the 3-D image volume are in essence four-dimensional. This makes it highly memory-consuming for applying advanced deep learning methods, such as the transformer and diffusion model, to FODs represented by high order SPHARMs. In this work, we present an order-balanced order-level (OBOL) autoencoder to compress the FODs with high accuracy after decoding. Our OBOL method uses separate encoders for FODs in each SPHARM order to balance the feature map size of FODs in different orders. This helps the encoder to better preserve information from the low-order coefficients that have more information but a smaller number of volumes. In our experiments, we demonstrated that the decoded FODs of our OBOL autoencoder have better accuracy than the spatial-level or order-level autoencoder without order balance. We also tested the encoded latent space of the OBOL autoencoder in FOD super-resolution. Results show high accuracy with feasible memory usage in commonly available GPUs.
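The order-level split underlying OBOL is easy to make concrete: FODs use even SPHARM orders only, and order l contributes 2l+1 coefficients, so the low orders have few volumes but carry most of the signal. A sketch of the split (hypothetical helper, not the paper's code):

```python
def split_by_order(coeffs, lmax):
    """Split an even-order SPHARM coefficient vector into per-order chunks.
    Order l holds 2*l + 1 coefficients; even orders 0, 2, ..., lmax are used,
    giving (lmax + 1) * (lmax + 2) // 2 coefficients in total."""
    chunks, i = {}, 0
    for l in range(0, lmax + 1, 2):
        n = 2 * l + 1
        chunks[l] = coeffs[i:i + n]
        i += n
    assert i == len(coeffs), "coefficient count mismatch"
    return chunks
```

Routing each chunk through its own encoder, as OBOL does, balances feature-map sizes across orders so the information-rich low orders are not drowned out by the many high-order volumes.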

Citations: 0
TOWARDS PATIENT-SPECIFIC SURGICAL PLANNING FOR BICUSPID AORTIC VALVE REPAIR: FULLY AUTOMATED SEGMENTATION OF THE AORTIC VALVE IN 4D CT.
Pub Date: 2025-04-01, Epub Date: 2025-05-12, DOI: 10.1109/ISBI60581.2025.10981269
Zaiyang Guo, Ningjun J Dong, Harold Litt, Natalie Yushkevich, Melanie Freas, Jessica Nunez, Victor Ferrari, Jilei Hao, Shir Goldfinger, Matthew A Jolley, Joseph Bavaria, Nimesh Desai, Alison M Pouch

The bicuspid aortic valve (BAV) is the most prevalent congenital heart defect and may require surgery for complications such as stenosis, regurgitation, and aortopathy. BAV repair surgery is effective but challenging due to the heterogeneity of BAV morphology. Multiple imaging modalities can be employed to assist the quantitative assessment of BAVs for surgical planning. Contrast-enhanced 4D computed tomography (CT) produces volumetric temporal sequences with excellent contrast and spatial resolution. Segmentation of the aortic cusps and root in these images is an essential step in creating patient-specific models for visualization and quantification. While deep learning-based methods are capable of fully automated segmentation, no BAV-specific model exists. Among valve segmentation studies, there has been limited quantitative assessment of the clinical usability of the segmentation results. In this work, we developed a fully automated multi-label BAV segmentation pipeline based on nnU-Net. The predicted segmentations were used to carry out surgically relevant morphological measurements including geometric cusp height, commissural angle and annulus diameter, and the results were compared against manual segmentation. Automated segmentation achieved average Dice scores of over 0.7 and symmetric mean distance below 0.7 mm for all three aortic cusps and the root wall. Clinically relevant benchmarks showed good consistency between manual and predicted segmentations. Overall, fully automated BAV segmentation of 3D frames in 4D CT can produce clinically usable measurements for surgical risk stratification, but the temporal consistency of segmentations needs to be improved.
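Of the reported measurements, the commissural angle is the simplest to sketch: the angle at a reference point (for instance the annulus centroid) subtended by two commissure landmarks extracted from the segmentation (illustrative geometry only, not the paper's pipeline):

```python
import math

def angle_deg(center, p1, p2):
    """Angle in degrees at `center` between landmark points p1 and p2 (3-D)."""
    v1 = [a - c for a, c in zip(p1, center)]
    v2 = [a - c for a, c in zip(p2, center)]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
```

Clamping the cosine to [-1, 1] guards against floating-point round-off when the landmarks are nearly collinear with the center.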

引用次数: 0
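The BAV abstract above reports its segmentation quality as average Dice scores over 0.7 and symmetric mean distances below 0.7 mm. As a minimal sketch of those two standard metrics (not the authors' evaluation code; the toy masks and point sets are illustrative assumptions):

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(pred, gt).sum() / denom)

def symmetric_mean_distance(a_pts: np.ndarray, b_pts: np.ndarray) -> float:
    """Average of the mean nearest-neighbour distances in both directions
    between two surface point sets of shape (n, 3)."""
    # pairwise Euclidean distances, shape (len(a_pts), len(b_pts))
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return float(0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean()))

# toy 1D "masks" for illustration
pred = np.array([0, 1, 1, 1, 0])
gt = np.array([0, 0, 1, 1, 1])
print(round(dice_score(pred, gt), 3))  # 2*2 / (3+3) ≈ 0.667
```

In practice these would be evaluated per label (each cusp and the root wall) on 3D masks and surface point clouds extracted from the segmentations.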
PLASMA-CYCLEGAN: PLASMA BIOMARKER-GUIDED MRI TO PET CROSS-MODALITY TRANSLATION USING CONDITIONAL CYCLEGAN.
Pub Date : 2025-04-01 Epub Date: 2025-05-12 DOI: 10.1109/isbi60581.2025.10980900
Yanxi Chen, Yi Su, Celine Dumitrascu, Kewei Chen, David Weidman, Richard J Caselli, Nicholas Ashton, Eric M Reiman, Yalin Wang

Cross-modality translation between MRI and PET imaging is challenging due to the distinct mechanisms underlying these modalities. Blood-based biomarkers (BBBMs) are revolutionizing Alzheimer's disease (AD) detection by identifying patients and quantifying brain amyloid levels. However, the potential of BBBMs to enhance PET image synthesis remains unexplored. In this paper, we performed a thorough study on the effect of incorporating BBBM into deep generative models. By evaluating three widely used cross-modality translation models, we found that BBBMs integration consistently enhances the generative quality across all models. By visual inspection of the generated results, we observed that PET images generated by CycleGAN exhibit the best visual fidelity. Based on these findings, we propose Plasma-CycleGAN, a novel generative model based on CycleGAN, to synthesize PET images from MRI using BBBMs as conditions. This is the first approach to integrate BBBMs in conditional cross-modality translation between MRI and PET.

Citations: 0
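The conditional cycle-consistency idea in the Plasma-CycleGAN abstract — translating MRI to PET and back while feeding the plasma biomarker in as an extra condition — can be illustrated with a toy NumPy sketch. The linear maps `G` and `F` and the scalar biomarker `c` are stand-ins for the real conditional generators, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two generators: G maps "MRI" features to "PET"
# features and F maps back, each concatenating the plasma biomarker value
# c as an extra conditioning input.
W_g = rng.normal(size=(4, 5))  # output_dim x (input_dim + 1 condition)
W_f = rng.normal(size=(4, 5))

def G(x: np.ndarray, c: float) -> np.ndarray:
    return W_g @ np.concatenate([x, [c]])

def F(y: np.ndarray, c: float) -> np.ndarray:
    return W_f @ np.concatenate([y, [c]])

def cycle_loss(x: np.ndarray, c: float) -> float:
    """L1 cycle-consistency: x -> G(x, c) -> F(G(x, c), c) should recover x."""
    return float(np.abs(F(G(x, c), c) - x).mean())

x = rng.normal(size=4)
c = 0.8  # hypothetical plasma biomarker level used as the condition
print(f"cycle loss: {cycle_loss(x, c):.3f}")  # nonzero: the toy maps are not inverses
```

In the actual model the generators are convolutional networks trained with adversarial losses alongside this cycle term; the sketch only shows where the biomarker condition enters.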
INDIVIDUALIZED TRAJECTORY PREDICTION OF EARLY DEVELOPING FUNCTIONAL CONNECTIVITY.
Pub Date : 2025-04-01 Epub Date: 2025-05-12 DOI: 10.1109/isbi60581.2025.10980810
Weiran Xia, Xin Zhang, Dan Hu, Jiale Cheng, Zhengwang Wu, Li Wang, Weili Lin, Gang Li

Predicting the development of functional connectivity (FC) derived from resting-state functional MRI is pivotal for elucidating the intrinsic brain functional organization and modeling its dynamic development during infancy. Existing deep learning methods typically predict FC at a target timepoint from each available FC independently, yielding inconsistent predictions and overlooking longitudinal dependencies, which introduce ambiguity in practical applications. Furthermore, the scarcity and irregular distribution of longitudinal rs-fMRI data pose significant challenges in accurately predicting and delineating the trajectories of early brain functional development. To address these issues, we propose a novel Triplet Cycle-Consistent Masked Autoencoder (TC-MAE) for the trajectory prediction of the development of infant FC. Our TC-MAE has the capability to traverse FC over an extended period, extract unique individual characteristics, and predict target FC at any given age in infancy with longitudinal consistency. Extensive experiments on 368 longitudinal infant rs-fMRI scans demonstrate the superior performance of the proposed method in longitudinal FC prediction compared with state-of-the-art approaches.

Citations: 0
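Masked-autoencoder training of the kind the TC-MAE abstract builds on hides a fraction of the functional-connectivity entries and scores reconstruction only on the hidden ones. A minimal NumPy sketch under assumed shapes (the masking scheme and loss here are generic MAE ingredients, not the paper's exact design):

```python
import numpy as np

rng = np.random.default_rng(42)

def random_mask(n_regions: int, mask_ratio: float, rng):
    """Boolean mask over the upper-triangular FC entries; True = hidden."""
    iu = np.triu_indices(n_regions, k=1)
    n_edges = len(iu[0])
    n_hide = int(round(mask_ratio * n_edges))
    hidden = np.zeros(n_edges, dtype=bool)
    hidden[rng.choice(n_edges, size=n_hide, replace=False)] = True
    return iu, hidden

def masked_recon_loss(fc_true, fc_pred, iu, hidden) -> float:
    """MSE computed only on the masked (hidden) connectivity entries,
    as in masked-autoencoder training."""
    t = fc_true[iu][hidden]
    p = fc_pred[iu][hidden]
    return float(np.mean((t - p) ** 2))

n = 6  # toy number of brain regions
fc = rng.normal(size=(n, n))
fc = (fc + fc.T) / 2  # FC matrices are symmetric
iu, hidden = random_mask(n, 0.75, rng)
loss_same = masked_recon_loss(fc, fc, iu, hidden)  # perfect reconstruction -> 0
```

The actual model additionally enforces cycle consistency across timepoints to keep the predicted trajectories longitudinally coherent; this sketch covers only the masking and loss step.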
Journal: Proceedings. IEEE International Symposium on Biomedical Imaging