
Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention: Latest Publications

Generating Realistic Brain MRIs via a Conditional Diffusion Probabilistic Model.
Wei Peng, Ehsan Adeli, Tomas Bosschieter, Sang Hyun Park, Qingyu Zhao, Kilian M Pohl

As acquiring MRIs is expensive, neuroscience studies struggle to attain a sufficient number of them for properly training deep learning models. This challenge could be reduced by MRI synthesis, for which Generative Adversarial Networks (GANs) are popular. GANs, however, are commonly unstable and struggle to create diverse, high-quality data. A more stable alternative is Diffusion Probabilistic Models (DPMs) with a fine-grained training strategy. To overcome their need for extensive computational resources, we propose a conditional DPM (cDPM) with a memory-efficient process that generates realistic-looking brain MRIs. To this end, we train a 2D cDPM to generate an MRI subvolume conditioned on another subset of slices from the same MRI. By generating slices using arbitrary combinations of condition and target slices, the model requires only limited computational resources to learn interdependencies between slices, even if they are spatially far apart. After having learned these dependencies via an attention network, a new anatomy-consistent 3D brain MRI is generated by repeatedly applying the cDPM. Our experiments demonstrate that our method can generate high-quality 3D MRIs that share a similar distribution to real MRIs while still diversifying the training set. The code is available at https://github.com/xiaoiker/mask3DMRI_diffusion and will also be released as part of MONAI, at https://github.com/Project-MONAI/GenerativeModels.
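As a rough illustration of the slice-conditioning idea above (this sketch and its names, e.g. `plan_generation`, are ours, not the authors' code), volume generation can be scheduled so that the first chunk of slices is produced unconditionally and every later chunk is conditioned on an arbitrary subset of already-generated slices, possibly far away in the volume:

```python
import random

def plan_generation(num_slices, chunk, num_cond, seed=0):
    """Plan a chunk-by-chunk generation order for a 3D volume.

    Returns a list of (condition_indices, target_indices) pairs: the first
    chunk has no conditions; each later chunk is conditioned on a random
    subset of previously generated slices (which may be spatially distant).
    """
    rng = random.Random(seed)
    done, plan = [], []
    for start in range(0, num_slices, chunk):
        target = list(range(start, min(start + chunk, num_slices)))
        cond = rng.sample(done, min(num_cond, len(done))) if done else []
        plan.append((cond, target))
        done.extend(target)
    return plan
```

Repeatedly running the trained 2D model over such a plan would assemble a full 3D volume while each call only ever sees a small, fixed number of slices.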

{"title":"Generating Realistic Brain MRIs via a Conditional Diffusion Probabilistic Model.","authors":"Wei Peng, Ehsan Adeli, Tomas Bosschieter, Sang Hyun Park, Qingyu Zhao, Kilian M Pohl","doi":"10.1007/978-3-031-43993-3_2","DOIUrl":"10.1007/978-3-031-43993-3_2","url":null,"abstract":"<p><p>As acquiring MRIs is expensive, neuroscience studies struggle to attain a sufficient number of them for properly training deep learning models. This challenge could be reduced by MRI synthesis, for which Generative Adversarial Networks (GANs) are popular. GANs, however, are commonly unstable and struggle with creating diverse and high-quality data. A more stable alternative is Diffusion Probabilistic Models (DPMs) with a fine-grained training strategy. To overcome their need for extensive computational resources, we propose a conditional DPM (cDPM) with a memory-efficient process that generates realistic-looking brain MRIs. To this end, we train a 2D cDPM to generate an MRI subvolume conditioned on another subset of slices from the same MRI. By generating slices using arbitrary combinations between condition and target slices, the model only requires limited computational resources to learn interdependencies between slices even if they are spatially far apart. After having learned these dependencies via an attention network, a new anatomy-consistent 3D brain MRI is generated by repeatedly applying the cDPM. Our experiments demonstrate that our method can generate high-quality 3D MRIs that share a similar distribution to real MRIs while still diversifying the training set. The code is available at https://github.com/xiaoiker/mask3DMRI_diffusion and also will be released as part of MONAI, at https://github.com/Project-MONAI/GenerativeModels.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... 
International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14227 ","pages":"14-24"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10758344/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139089834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Learning Expected Appearances for Intraoperative Registration during Neurosurgery.
Nazim Haouchine, Reuben Dorent, Parikshit Juvekar, Erickson Torio, William M Wells, Tina Kapur, Alexandra J Golby, Sarah Frisken

We present a novel method for intraoperative patient-to-image registration by learning Expected Appearances. Our method uses preoperative imaging to synthesize patient-specific expected views through a surgical microscope for a predicted range of transformations. It then estimates the camera pose by minimizing the dissimilarity between the intraoperative 2D view through the optical microscope and the synthesized expected texture. In contrast to conventional methods, our approach transfers the processing tasks to the preoperative stage, thereby reducing the impact of the low-resolution, distorted, and noisy intraoperative images that often degrade registration accuracy. We applied our method in the context of neuronavigation during brain surgery and evaluated it on synthetic data and on retrospective data from six clinical cases. Our method outperformed state-of-the-art methods and achieved accuracies that met current clinical standards.
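Stripped to its essentials, the matching step amounts to searching a preoperatively synthesized bank of (pose, expected view) pairs for the one most similar to the live view. A minimal sketch under that reading (function and variable names are hypothetical, and sum-of-squared-differences stands in for whatever dissimilarity the authors use):

```python
import numpy as np

def register_by_expected_appearance(intraop_view, pose_bank):
    """Return the camera pose whose precomputed expected view is most
    similar to the intraoperative image (SSD dissimilarity).

    pose_bank: list of (pose, expected_view) pairs synthesized preoperatively
    for a predicted range of transformations.
    """
    best_pose, best_cost = None, np.inf
    for pose, view in pose_bank:
        cost = float(np.sum((intraop_view - view) ** 2))
        if cost < best_cost:
            best_pose, best_cost = pose, cost
    return best_pose, best_cost
```

Because the expensive synthesis happens before surgery, the intraoperative step reduces to this cheap nearest-appearance lookup (or a local refinement around it).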

{"title":"Learning Expected Appearances for Intraoperative Registration during Neurosurgery.","authors":"Nazim Haouchine, Reuben Dorent, Parikshit Juvekar, Erickson Torio, William M Wells, Tina Kapur, Alexandra J Golby, Sarah Frisken","doi":"10.1007/978-3-031-43996-4_22","DOIUrl":"10.1007/978-3-031-43996-4_22","url":null,"abstract":"<p><p>We present a novel method for intraoperative patient-to-image registration by learning Expected Appearances. Our method uses preoperative imaging to synthesize patient-specific expected views through a surgical microscope for a predicted range of transformations. Our method estimates the camera pose by minimizing the dissimilarity between the intraoperative 2D view through the optical microscope and the synthesized expected texture. In contrast to conventional methods, our approach transfers the processing tasks to the preoperative stage, reducing thereby the impact of low-resolution, distorted, and noisy intraoperative images, that often degrade the registration accuracy. We applied our method in the context of neuronavigation during brain surgery. We evaluated our approach on synthetic data and on retrospective data from 6 clinical cases. Our method outperformed state-of-the-art methods and achieved accuracies that met current clinical standards.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... 
International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14228 ","pages":"227-237"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10870253/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139901119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Mesh2SSM: From Surface Meshes to Statistical Shape Models of Anatomy.
Krithika Iyer, Shireen Elhabian

Statistical shape modeling is the computational process of discovering significant shape parameters from segmented anatomies captured by medical images (such as MRI and CT scans), which can fully describe subject-specific anatomy in the context of a population. The presence of substantial non-linear variability in human anatomy often makes the traditional shape modeling process challenging. Deep learning techniques can learn complex non-linear representations of shapes and generate statistical shape models that are more faithful to the underlying population-level variability. However, existing deep learning models still have limitations and require established/optimized shape models for training. We propose Mesh2SSM, a new approach that leverages unsupervised, permutation-invariant representation learning to estimate how to deform a template point cloud to subject-specific meshes, forming a correspondence-based shape model. Mesh2SSM can also learn a population-specific template, reducing any bias due to template selection. The proposed method operates directly on meshes and is computationally efficient, making it an attractive alternative to traditional and deep learning-based SSM approaches.
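Once a template has been deformed into correspondence with every subject, the statistical shape model itself is typically a point-distribution model: a mean shape plus principal modes of variation. A minimal numpy sketch of that final step (the generic PCA construction, not the Mesh2SSM network):

```python
import numpy as np

def build_ssm(correspondences):
    """Build a point-distribution shape model from corresponded point sets.

    correspondences: (num_subjects, num_points, 3) array in which row k of
    every subject marks the same anatomical location (the role played by the
    deformed template in a correspondence-based model).
    Returns the mean shape, the principal modes, and the singular values.
    """
    X = correspondences.reshape(len(correspondences), -1)    # one vector per subject
    mean = X.mean(axis=0)
    # PCA via SVD of the centered data matrix
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean.reshape(-1, 3), Vt.reshape(len(Vt), -1, 3), S
```

New plausible shapes can then be sampled as the mean plus a weighted sum of the leading modes.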

{"title":"Mesh2SSM: From Surface Meshes to Statistical Shape Models of Anatomy.","authors":"Krithika Iyer, Shireen Elhabian","doi":"10.1007/978-3-031-43907-0_59","DOIUrl":"10.1007/978-3-031-43907-0_59","url":null,"abstract":"<p><p>Statistical shape modeling is the computational process of discovering significant shape parameters from segmented anatomies captured by medical images (such as MRI and CT scans), which can fully describe subject-specific anatomy in the context of a population. The presence of substantial non-linear variability in human anatomy often makes the traditional shape modeling process challenging. Deep learning techniques can learn complex non-linear representations of shapes and generate statistical shape models that are more faithful to the underlying population-level variability. However, existing deep learning models still have limitations and require established/optimized shape models for training. We propose Mesh2SSM, a new approach that leverages unsupervised, permutation-invariant representation learning to estimate how to deform a template point cloud to subject-specific meshes, forming a correspondence-based shape model. Mesh2SSM can also learn a population-specific template, reducing any bias due to template selection. The proposed method operates directly on meshes and is computationally efficient, making it an attractive alternative to traditional and deep learning-based SSM approaches.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... 
International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14220 ","pages":"615-625"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11036176/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140862102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Incremental Learning for Heterogeneous Structure Segmentation in Brain Tumor MRI.
Xiaofeng Liu, Helen A Shih, Fangxu Xing, Emiliano Santarnecchi, Georges El Fakhri, Jonghye Woo

Deep learning (DL) models for segmenting various anatomical structures have achieved great success via a static DL model that is trained in a single source domain. Yet, a static DL model is likely to perform poorly in a continually evolving environment, requiring appropriate model updates. In an incremental learning setting, we would expect well-trained static models to be updated, following continually evolving target domain data (e.g., additional lesions or structures of interest collected from different sites), without catastrophic forgetting. This, however, poses challenges due to distribution shifts, additional structures not seen during the initial model training, and the absence of training data in a source domain. To address these challenges, in this work, we seek to progressively evolve an "off-the-shelf" trained segmentation model to diverse datasets with additional anatomical categories in a unified manner. Specifically, we first propose a divergence-aware dual-flow module with balanced rigidity and plasticity branches to decouple old and new tasks, guided by continuous batch renormalization. Then, a complementary pseudo-label training scheme with self-entropy regularized momentum MixUp decay is developed for adaptive network optimization. We evaluated our framework on a brain tumor segmentation task with continually changing target domains (i.e., new MRI scanners/modalities with incremental structures). Our framework retained the discriminability of previously learned structures well, hence enabling realistic life-long extension of segmentation models along with the widespread accumulation of big medical data.
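For context on the batch renormalization that guides the dual-flow module, one training-time forward pass of the generic technique (Ioffe, 2017; this is the standard formulation, not the paper's exact module) normalizes with batch statistics but corrects toward the running statistics via clipped factors r and d, which stabilizes training when the data distribution keeps shifting:

```python
import numpy as np

def batch_renorm(x, running_mean, running_var, momentum=0.99,
                 r_max=3.0, d_max=5.0, eps=1e-5):
    """One training-time batch-renormalization pass over features x (N, C).

    Normalizes with batch statistics, then applies the clipped correction
    factors r and d so the output stays consistent with the running
    statistics used at inference time. Returns the normalized batch and the
    updated running mean/variance.
    """
    mu, var = x.mean(0), x.var(0)
    sigma = np.sqrt(var + eps)
    run_sigma = np.sqrt(running_var + eps)
    r = np.clip(sigma / run_sigma, 1.0 / r_max, r_max)   # scale correction
    d = np.clip((mu - running_mean) / run_sigma, -d_max, d_max)  # shift correction
    x_hat = (x - mu) / sigma * r + d
    new_mean = momentum * running_mean + (1 - momentum) * mu
    new_var = momentum * running_var + (1 - momentum) * var
    return x_hat, new_mean, new_var
```

In the continual-learning setting above, the slowly updated running statistics are what carry over from the old domain while r and d absorb the shift introduced by each new one.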

{"title":"Incremental Learning for Heterogeneous Structure Segmentation in Brain Tumor MRI.","authors":"Xiaofeng Liu, Helen A Shih, Fangxu Xing, Emiliano Santarnecchi, Georges El Fakhri, Jonghye Woo","doi":"10.1007/978-3-031-43895-0_5","DOIUrl":"https://doi.org/10.1007/978-3-031-43895-0_5","url":null,"abstract":"<p><p>Deep learning (DL) models for segmenting various anatomical structures have achieved great success via a static DL model that is trained in a single source domain. Yet, the static DL model is likely to perform poorly in a continually evolving environment, requiring appropriate model updates. In an incremental learning setting, we would expect that well-trained static models are updated, following continually evolving target domain data-e.g., additional lesions or structures of interest-collected from different sites, without catastrophic forgetting. This, however, poses challenges, due to distribution shifts, additional structures not seen during the initial model training, and the absence of training data in a source domain. To address these challenges, in this work, we seek to progressively evolve an \"off-the-shelf\" trained segmentation model to diverse datasets with additional anatomical categories in a unified manner. Specifically, we first propose a divergence-aware dual-flow module with balanced rigidity and plasticity branches to decouple old and new tasks, which is guided by continuous batch renormalization. Then, a complementary pseudo-label training scheme with self-entropy regularized momentum MixUp decay is developed for adaptive network optimization. We evaluated our framework on a brain tumor segmentation task with continually changing target domains-i.e., new MRI scanners/modalities with incremental structures. 
Our framework was able to well retain the discriminability of previously learned structures, hence enabling the realistic life-long segmentation model extension along with the widespread accumulation of big medical data.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14221 ","pages":"46-56"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11045038/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140869740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
An AI-Ready Multiplex Staining Dataset for Reproducible and Accurate Characterization of Tumor Immune Microenvironment.
Parmida Ghahremani, Joseph Marino, Juan Hernandez-Prera, Janis V de la Iglesia, Robbert Jc Slebos, Christine H Chung, Saad Nadeem

We introduce a new AI-ready computational pathology dataset containing restained and co-registered digitized images from eight head-and-neck squamous cell carcinoma patients. Specifically, the same tumor sections were stained with the expensive multiplex immunofluorescence (mIF) assay first and then restained with cheaper multiplex immunohistochemistry (mIHC). This is the first public dataset to demonstrate the equivalence of these two staining methods, which in turn enables several use cases; due to the equivalence, our cheaper mIHC staining protocol can offset the need for expensive mIF staining/scanning, which requires highly skilled lab technicians. As opposed to the subjective and error-prone immune cell annotations from individual pathologists (disagreement > 50%) used to drive SOTA deep learning approaches, this dataset provides objective immune and tumor cell annotations via mIF/mIHC restaining for more reproducible and accurate characterization of the tumor immune microenvironment (e.g., for immunotherapy). We demonstrate the effectiveness of this dataset in three use cases: (1) IHC quantification of CD3/CD8 tumor-infiltrating lymphocytes via style transfer, (2) virtual translation of cheap mIHC stains to more expensive mIF stains, and (3) virtual tumor/immune cellular phenotyping on standard hematoxylin images. The dataset is available at https://github.com/nadeemlab/DeepLIIF.
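Behind the first use case, the basic quantity in IHC quantification of tumor-infiltrating lymphocytes is the fraction of detected cells carrying a positive marker class. A deliberately tiny sketch (the function name and label scheme are hypothetical, not the dataset's API):

```python
def til_fraction(cell_labels, positive_ids):
    """Fraction of segmented cells whose class id is a positive marker
    (e.g. CD3/CD8 lymphocyte classes) among all detected cells.

    cell_labels: iterable with one class id per detected cell.
    positive_ids: set of class ids counted as positive.
    """
    cells = list(cell_labels)
    if not cells:
        return 0.0
    return sum(c in positive_ids for c in cells) / len(cells)
```

With objective mIF/mIHC-derived labels, such a fraction can be computed reproducibly instead of relying on pathologist counts that disagree by more than 50%.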

{"title":"An AI-Ready Multiplex Staining Dataset for Reproducible and Accurate Characterization of Tumor Immune Microenvironment.","authors":"Parmida Ghahremani,&nbsp;Joseph Marino,&nbsp;Juan Hernandez-Prera,&nbsp;Janis V de la Iglesia,&nbsp;Robbert Jc Slebos,&nbsp;Christine H Chung,&nbsp;Saad Nadeem","doi":"10.1007/978-3-031-43987-2_68","DOIUrl":"10.1007/978-3-031-43987-2_68","url":null,"abstract":"<p><p>We introduce a new AI-ready computational pathology dataset containing restained and co-registered digitized images from eight head-and-neck squamous cell carcinoma patients. Specifically, the same tumor sections were stained with the expensive multiplex immunofluorescence (mIF) assay first and then restained with cheaper multiplex immunohistochemistry (mIHC). This is a first public dataset that demonstrates the equivalence of these two staining methods which in turn allows several use cases; due to the equivalence, our cheaper mIHC staining protocol can offset the need for expensive mIF staining/scanning which requires highly-skilled lab technicians. As opposed to subjective and error-prone immune cell annotations from individual pathologists (disagreement > 50%) to drive SOTA deep learning approaches, this dataset provides objective immune and tumor cell annotations via mIF/mIHC restaining for more reproducible and accurate characterization of tumor immune microenvironment (e.g. for immunotherapy). We demonstrate the effectiveness of this dataset in three use cases: (1) IHC quantification of CD3/CD8 tumor-infiltrating lymphocytes via style transfer, (2) virtual translation of cheap mIHC stains to more expensive mIF stains, and (3) virtual tumor/immune cellular phenotyping on standard hematoxylin images. The dataset is available at https://github.com/nadeemlab/DeepLIIF.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... 
International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14225 ","pages":"704-713"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10571229/pdf/nihms-1933600.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41242890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Bidirectional Mapping with Contrastive Learning on Multimodal Neuroimaging Data.
Kai Ye, Haoteng Tang, Siyuan Dai, Lei Guo, Johnny Yuehan Liu, Yalin Wang, Alex Leow, Paul M Thompson, Heng Huang, Liang Zhan

The modeling of the interaction between brain structure and function using deep learning techniques has yielded remarkable success in identifying potential biomarkers for different clinical phenotypes and brain diseases. However, most existing studies focus on one-way mapping, either projecting brain function to brain structure or inversely. This type of unidirectional mapping approach is limited by the fact that it treats the mapping as a one-way task and neglects the intrinsic unity between these two modalities. Moreover, when dealing with the same biological brain, mapping from structure to function and from function to structure yields dissimilar outcomes, highlighting the likelihood of bias in one-way mapping. To address this issue, we propose a novel bidirectional mapping model, named Bidirectional Mapping with Contrastive Learning (BMCL), to reduce the bias between these two unidirectional mappings via ROI-level contrastive learning. We evaluate our framework on clinical phenotype and neurodegenerative disease predictions using two publicly available datasets (HCP and OASIS). Our results demonstrate the superiority of BMCL compared to several state-of-the-art methods.
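A common instantiation of ROI-level contrastive learning is a symmetric InfoNCE loss that pulls the structural and functional embeddings of the same ROI together while pushing different ROIs apart. The sketch below shows that generic loss in numpy (not necessarily BMCL's exact formulation):

```python
import numpy as np

def _log_softmax(m):
    # numerically stable row-wise log-softmax
    m = m - m.max(axis=1, keepdims=True)
    return m - np.log(np.exp(m).sum(axis=1, keepdims=True))

def roi_contrastive_loss(z_struct, z_func, tau=0.1):
    """Symmetric InfoNCE over ROI embeddings (num_rois, dim): row i of the
    structural view should match row i of the functional view of the same
    ROI, against all other ROIs as negatives, and vice versa.
    """
    a = z_struct / np.linalg.norm(z_struct, axis=1, keepdims=True)
    b = z_func / np.linalg.norm(z_func, axis=1, keepdims=True)
    logits = a @ b.T / tau                      # cosine similarities / temperature
    return -0.5 * (np.mean(np.diag(_log_softmax(logits)))
                   + np.mean(np.diag(_log_softmax(logits.T))))
```

Because the loss is symmetric in the two views, it penalizes both mapping directions at once, which is the sense in which such a term can reduce the bias between the two unidirectional mappings.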

{"title":"Bidirectional Mapping with Contrastive Learning on Multimodal Neuroimaging Data.","authors":"Kai Ye, Haoteng Tang, Siyuan Dai, Lei Guo, Johnny Yuehan Liu, Yalin Wang, Alex Leow, Paul M Thompson, Heng Huang, Liang Zhan","doi":"10.1007/978-3-031-43898-1_14","DOIUrl":"10.1007/978-3-031-43898-1_14","url":null,"abstract":"<p><p>The modeling of the interaction between brain structure and function using deep learning techniques has yielded remarkable success in identifying potential biomarkers for different clinical phenotypes and brain diseases. However, most existing studies focus on one-way mapping, either projecting brain function to brain structure or inversely. This type of unidirectional mapping approach is limited by the fact that it treats the mapping as a one-way task and neglects the intrinsic unity between these two modalities. Moreover, when dealing with the same biological brain, mapping from structure to function and from function to structure yields dissimilar outcomes, highlighting the likelihood of bias in one-way mapping. To address this issue, we propose a novel bidirectional mapping model, named Bidirectional Mapping with Contrastive Learning (BMCL), to reduce the bias between these two unidirectional mappings via ROI-level contrastive learning. We evaluate our framework on clinical phenotype and neurodegenerative disease predictions using two publicly available datasets (HCP and OASIS). Our results demonstrate the superiority of BMCL compared to several state-of-the-art methods.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... 
International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14222 ","pages":"138-148"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11245326/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141617890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
ACTION++: Improving Semi-supervised Medical Image Segmentation with Adaptive Anatomical Contrast.
Chenyu You, Weicheng Dai, Yifei Min, Lawrence Staib, Jas Sekhon, James S Duncan

Medical data often exhibits long-tail distributions with heavy class imbalance, which naturally leads to difficulty in classifying the minority classes (i.e., boundary regions or rare objects). Recent work has significantly improved semi-supervised medical image segmentation in long-tailed scenarios by equipping them with unsupervised contrastive criteria. However, it remains unclear how well they will perform in the labeled portion of data where class distribution is also highly imbalanced. In this work, we present ACTION++, an improved contrastive learning framework with adaptive anatomical contrast for semi-supervised medical segmentation. Specifically, we propose an adaptive supervised contrastive loss, where we first compute the optimal locations of class centers uniformly distributed on the embedding space (i.e., off-line), and then perform online contrastive matching training by encouraging different class features to adaptively match these distinct and uniformly distributed class centers. Moreover, we argue that blindly adopting a constant temperature τ in the contrastive loss on long-tailed medical data is not optimal, and propose to use a dynamic τ via a simple cosine schedule to yield better separation between majority and minority classes. Empirically, we evaluate ACTION++ on ACDC and LA benchmarks and show that it achieves state-of-the-art across two semi-supervised settings. Theoretically, we analyze the performance of adaptive anatomical contrast and confirm its superiority in label efficiency.
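The dynamic temperature "via a simple cosine schedule" could look like the following; the endpoints tau_min/tau_max here are illustrative choices, not values from the paper:

```python
import math

def dynamic_temperature(step, total_steps, tau_min=0.07, tau_max=1.0):
    """Cosine anneal of the contrastive temperature from tau_max down to
    tau_min over training, instead of a constant tau.
    """
    cos = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
    return tau_min + (tau_max - tau_min) * cos
```

A large early temperature spreads probability mass over many negatives, while the small late temperature sharpens the contrast, helping separate majority from minority (e.g., boundary or rare-object) classes.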

{"title":"ACTION++: Improving Semi-supervised Medical Image Segmentation with Adaptive Anatomical Contrast.","authors":"Chenyu You, Weicheng Dai, Yifei Min, Lawrence Staib, Jas Sekhon, James S Duncan","doi":"10.1007/978-3-031-43901-8_19","DOIUrl":"10.1007/978-3-031-43901-8_19","url":null,"abstract":"<p><p>Medical data often exhibits long-tail distributions with heavy class imbalance, which naturally leads to difficulty in classifying the minority classes (<i>i.e</i>., boundary regions or rare objects). Recent work has significantly improved semi-supervised medical image segmentation in long-tailed scenarios by equipping them with unsupervised contrastive criteria. However, it remains unclear how well they will perform in the labeled portion of data where class distribution is also highly imbalanced. In this work, we present <b>ACTION++</b>, an improved contrastive learning framework with adaptive anatomical contrast for semi-supervised medical segmentation. Specifically, we propose an adaptive supervised contrastive loss, where we first compute the optimal locations of class centers uniformly distributed on the embedding space (<i>i.e</i>., off-line), and then perform online contrastive matching training by encouraging different class features to adaptively match these distinct and uniformly distributed class centers. Moreover, we argue that blindly adopting a <i>constant</i> temperature <math><mi>τ</mi></math> in the contrastive loss on long-tailed medical data is not optimal, and propose to use a <i>dynamic</i> <math><mi>τ</mi></math> via a simple cosine schedule to yield better separation between majority and minority classes. Empirically, we evaluate ACTION++ on ACDC and LA benchmarks and show that it achieves state-of-the-art across two semi-supervised settings. 
Theoretically, we analyze the performance of adaptive anatomical contrast and confirm its superiority in label efficiency.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"14223 ","pages":"194-205"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11136572/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141177034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Speech Audio Synthesis from Tagged MRI and Non-Negative Matrix Factorization via Plastic Transformer.
Xiaofeng Liu, Fangxu Xing, Maureen Stone, Jiachen Zhuo, Sidney Fels, Jerry L Prince, Georges El Fakhri, Jonghye Woo

The tongue's intricate 3D structure, comprising localized functional units, plays a crucial role in the production of speech. When measured using tagged MRI, these functional units exhibit cohesive displacements and derived quantities that facilitate the complex process of speech production. Non-negative matrix factorization-based approaches have been shown to estimate the functional units through motion features, yielding a set of building blocks and a corresponding weighting map. Investigating the link between weighting maps and speech acoustics can offer significant insights into the intricate process of speech production. To this end, in this work, we utilize two-dimensional spectrograms as a proxy representation and develop an end-to-end deep learning framework for translating weighting maps to their corresponding audio waveforms. Our proposed plastic light transformer (PLT) framework is based on directional product relative position bias and single-level spatial pyramid pooling, enabling variable-size weighting maps to be processed flexibly into fixed-size spectrograms without input information loss or dimension expansion. Additionally, our PLT framework efficiently models the global correlation of wide matrix input. To improve the realism of our generated spectrograms with relatively limited training samples, we apply pair-wise utterance consistency with a Maximum Mean Discrepancy constraint and adversarial training. Experimental results on a dataset of 29 subjects speaking two utterances demonstrated that our framework is able to synthesize speech audio waveforms from weighting maps, outperforming conventional convolution and transformer models.
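The spatial pyramid pooling that maps a variable-size weighting map to a fixed-size representation can be illustrated with adaptive max pooling over a fixed grid of bins. A generic single-level sketch (assuming the input height and width are at least the output grid size; not the PLT module itself):

```python
import numpy as np

def spatial_pyramid_pool(feat, out_h, out_w):
    """Pool a variable-size 2-D feature map into a fixed out_h x out_w grid
    by taking the max over adaptive bins, so downstream layers always see a
    constant size regardless of the input map's dimensions.
    """
    h, w = feat.shape
    rows = np.linspace(0, h, out_h + 1).astype(int)  # bin edges along height
    cols = np.linspace(0, w, out_w + 1).astype(int)  # bin edges along width
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = feat[rows[i]:rows[i + 1], cols[j]:cols[j + 1]].max()
    return out
```

Inputs of different shapes thus land in the same output shape, which is what lets a single network accept weighting maps of varying size without padding or resampling losses.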

DOI: 10.1007/978-3-031-43990-2_41. Medical Image Computing and Computer-Assisted Intervention - MICCAI 2023, vol. 14226, pp. 435-445 (published 2023-10-01). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11034915/pdf/
Citations: 0
Personalized Patch-based Normality Assessment of Brain Atrophy in Alzheimer's Disease.
Jianwei Zhang, Yonggang Shi

Cortical thickness is an important biomarker associated with gray matter atrophy in neurodegenerative diseases. To conduct meaningful comparisons of cortical thickness between different subjects, it is imperative to establish correspondence among surface meshes. Conventional methods achieve this by projecting the surface onto canonical domains such as the unit sphere, or by averaging feature values in anatomical regions of interest (ROIs). However, due to the natural variability of cortical topography, a perfect, anatomically meaningful one-to-one mapping can hardly be achieved, and the practice of averaging leads to the loss of detailed information. For example, two subjects may have different numbers of gyral structures in the same region, so mapping can produce gyral/sulcal mismatches that introduce noise, while averaging discards detailed local information. It is therefore necessary to develop a new method that overcomes these intrinsic problems and constructs more meaningful comparisons for atrophy detection. To address these limitations, we propose a novel personalized patch-based method to improve cortical thickness comparison across subjects. Our model segments the brain surface into patches based on gyral and sulcal structures to reduce mismatches in the mapping while still preserving detailed topological information that averaging would potentially discard. Moreover, the personalized templates for each patch account for the variability of folding patterns, as not all subjects are comparable. Finally, through normality assessment experiments, we demonstrate that our model performs better than standard spherical registration in detecting atrophy in patients with mild cognitive impairment (MCI) and Alzheimer's disease (AD).
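The normality-assessment idea — comparing a subject's per-patch thickness against a template distribution built from controls — can be sketched as a simple z-score test. Everything here (patch count, thickness values, the z < -2 threshold) is an illustrative assumption, not the paper's implementation:

```python
# Sketch of patch-wise normality assessment of cortical thickness.
# For each surface patch, a subject's mean thickness is compared to a
# control-derived template distribution; a strongly negative z-score
# flags possible atrophy. Data and threshold are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_controls, n_patches = 50, 8
# Template: control cohort thickness per patch (mm), roughly 2.5 +/- 0.2.
control_thickness = 2.5 + 0.2 * rng.standard_normal((n_controls, n_patches))

mu = control_thickness.mean(axis=0)
sigma = control_thickness.std(axis=0, ddof=1)

# Hypothetical subject: normal everywhere except focal thinning in patch 3.
subject = mu.copy()
subject[3] -= 1.0

z = (subject - mu) / sigma
atrophic = np.where(z < -2.0)[0]  # patches significantly thinner than normal
print(atrophic)
```

The paper's contribution is in how the patches and templates are personalized to each subject's folding pattern; the statistical comparison itself stays this simple.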

DOI: 10.1007/978-3-031-43904-9_6. Medical Image Computing and Computer-Assisted Intervention - MICCAI 2023, vol. 14224, pp. 55-62 (published 2023-10-01). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10948101/pdf/
Citations: 0
Uncovering Heterogeneity in Alzheimer's Disease from Graphical Modeling of the Tau Spatiotemporal Topography.
Jiaxin Yue, Yonggang Shi

Growing evidence from post-mortem and in vivo studies has demonstrated the substantial variability of tau pathology spreading patterns in Alzheimer's disease (AD). Automated tools for characterizing the heterogeneity of tau pathology will enable a more accurate understanding of the disease and help the development of targeted treatments. In this paper, we propose a Reeb graph representation of tau pathology topography on cortical surfaces using tau PET imaging data. By comparing the spatial and temporal coherence of the Reeb graph representation across subjects, we can build a directed graph to represent the distribution of tau topography over a population, which naturally facilitates the discovery of spatiotemporal subtypes of tau pathology through graph-based clustering. In our experiments, we conducted extensive comparisons with a state-of-the-art event-based model on synthetic and large-scale tau PET imaging data from the ADNI3 and A4 studies. We demonstrated that our proposed method achieves more robust subtyping of tau pathology with clear clinical significance and superior generalization performance compared with the event-based model.
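The final graph-based clustering step — linking subjects whose topography representations are coherent and reading subtypes off the resulting graph — can be sketched with SciPy. The toy two-dimensional descriptors and the coherence threshold below are assumptions for illustration; the paper's actual representation is a Reeb graph per subject, not a feature vector:

```python
# Sketch of graph-based subtyping: connect subjects whose (toy) topography
# descriptors are within a coherence threshold, then take connected
# components of the subject graph as candidate spatiotemporal subtypes.
# Illustrates only the clustering step, not the Reeb-graph construction.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Toy per-subject descriptors: two clearly separated groups.
subjects = np.array([
    [0.90, 0.10], [0.85, 0.15], [0.80, 0.20],   # candidate subtype A
    [0.10, 0.90], [0.15, 0.85], [0.20, 0.80],   # candidate subtype B
])

# Pairwise distances; link subjects closer than an assumed threshold of 0.3.
d = np.linalg.norm(subjects[:, None] - subjects[None, :], axis=-1)
adjacency = csr_matrix((d < 0.3) & (d > 0))

n_subtypes, labels = connected_components(adjacency, directed=False)
print(n_subtypes, labels)  # → 2 [0 0 0 1 1 1]
```

A population-level directed graph, as in the paper, would additionally orient edges by temporal ordering of the tau topography; the component (or community) extraction proceeds the same way.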

DOI: 10.1007/978-3-031-43904-9_26. Medical Image Computing and Computer-Assisted Intervention - MICCAI 2023, vol. 14224, pp. 262-271 (published 2023-10-01). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10951551/pdf/
Citations: 0