Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention: Latest Publications

Unified Embeddings of Structural and Functional Connectome via a Function-Constrained Structural Graph Variational Auto-Encoder.
Carlo Amodeo, Igor Fortel, Olusola Ajilore, Liang Zhan, Alex Leow, Theja Tulabandhula

Graph theoretical analyses have become standard tools in modeling functional and anatomical connectivity in the brain. With the advent of connectomics, the primary graphs or networks of interest are the structural connectome (derived from DTI tractography) and the functional connectome (derived from resting-state fMRI). However, most published connectome studies have focused on either the structural or the functional connectome, yet complementary information between them, when available in the same dataset, can be jointly leveraged to improve our understanding of the brain. To this end, we propose a function-constrained structural graph variational autoencoder (FCS-GVAE) capable of incorporating information from both the functional and structural connectome in an unsupervised fashion. This leads to a joint low-dimensional embedding that establishes a unified spatial coordinate system for comparing across different subjects. We evaluate our approach using the publicly available OASIS-3 Alzheimer's disease (AD) dataset and show that a variational formulation is necessary to optimally encode functional brain dynamics. Further, the proposed joint embedding approach can more accurately distinguish different patient sub-populations than approaches that do not use complementary connectome information.
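
To make the modeling idea concrete, below is a minimal sketch of a graph variational autoencoder whose loss reconstructs a structural connectome while softly constraining the reconstruction with a functional connectome. It assumes PyTorch; the single propagation layer, the inner-product decoder, and the loss weighting `lam` are illustrative choices, not the published FCS-GVAE architecture.

```python
# Minimal sketch of a function-constrained graph VAE (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphVAE(nn.Module):
    def __init__(self, n_nodes, hidden_dim=64, latent_dim=16):
        super().__init__()
        self.enc = nn.Linear(n_nodes, hidden_dim)      # one GCN-style layer
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, adj, feats):
        h = F.relu(self.enc(adj @ feats))              # propagate node features
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = torch.sigmoid(z @ z.T)                 # inner-product decoder
        return recon, mu, logvar

def loss_fn(recon, struct_adj, func_adj, mu, logvar, lam=0.1):
    rec = F.binary_cross_entropy(recon, struct_adj)    # structural reconstruction
    func = F.mse_loss(recon, func_adj)                 # functional soft constraint
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + lam * func + kld

n = 8
struct = (torch.rand(n, n) > 0.5).float()
struct = torch.max(struct, struct.T)                   # toy symmetric structural graph
func = torch.rand(n, n); func = (func + func.T) / 2    # toy functional connectome
model = GraphVAE(n)
recon, mu, logvar = model(struct, torch.eye(n))
loss = loss_fn(recon, struct, func, mu, logvar)
```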

{"title":"Unified Embeddings of Structural and Functional Connectome via a Function-Constrained Structural Graph Variational Auto-Encoder.","authors":"Carlo Amodeo, Igor Fortel, Olusola Ajilore, Liang Zhan, Alex Leow, Theja Tulabandhula","doi":"10.1007/978-3-031-16431-6_39","DOIUrl":"10.1007/978-3-031-16431-6_39","url":null,"abstract":"<p><p>Graph theoretical analyses have become standard tools in modeling functional and anatomical connectivity in the brain. With the advent of connectomics, the primary graphs or networks of interest are structural connectome (derived from DTI tractography) and functional connectome (derived from resting-state fMRI). However, most published connectome studies have focused on either structural or functional connectome, yet complementary information between them, when available in the same dataset, can be jointly leveraged to improve our understanding of the brain. To this end, we propose a function-constrained structural graph variational autoencoder (FCS-GVAE) capable of incorporating information from both functional and structural connectome in an unsupervised fashion. This leads to a joint low-dimensional embedding that establishes a unified spatial coordinate system for comparing across different subjects. We evaluate our approach using the publicly available OASIS-3 Alzheimer's disease (AD) dataset and show that a variational formulation is necessary to optimally encode functional brain dynamics. Further, the proposed joint embedding approach can more accurately distinguish different patient sub-populations than approaches that do not use complementary connectome information.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"13431 ","pages":"406-415"},"PeriodicalIF":0.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11246745/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141617891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Brain-Aware Replacements for Supervised Contrastive Learning in Detection of Alzheimer's Disease.
Mehmet Saygın Seyfioğlu, Zixuan Liu, Pranav Kamath, Sadjyot Gangolli, Sheng Wang, Thomas Grabowski, Linda Shapiro

We propose a novel framework for Alzheimer's disease (AD) detection using brain MRIs. The framework starts with a data augmentation method called Brain-Aware Replacements (BAR), which leverages a standard brain parcellation to replace medically-relevant 3D brain regions in an anchor MRI with regions from a randomly picked MRI to create synthetic samples. Ground-truth "hard" labels are also linearly mixed according to the replacement ratio in order to create "soft" labels. BAR produces a great variety of realistic-looking synthetic MRIs with higher local variability compared to other mix-based methods, such as CutMix. On top of BAR, we propose a soft-label-capable supervised contrastive loss, aiming to learn a relative similarity of representations that reflects, via our soft labels, how mixed the synthetic MRIs are. This way, we do not fully exhaust the entropic capacity of our hard labels, since we only use them to create soft labels and synthetic MRIs through BAR. We show that a model pre-trained using our framework can be further fine-tuned with a cross-entropy loss using the hard labels that were used to create the synthetic samples. We validated the performance of our framework in a binary AD detection task against both from-scratch supervised training and state-of-the-art self-supervised training plus fine-tuning approaches. We then evaluated BAR's individual performance against another mix-based method, CutMix, by integrating each within our framework. We show that our framework yields superior results in both precision and recall for the AD detection task.
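
As a rough illustration of the replacement-and-soft-label step, here is a NumPy sketch: parcel ids listed in `regions` are swapped from one volume into another, and the label is mixed linearly by the replaced-voxel ratio. The region selection and the exact mixing rule are assumptions beyond the ratio idea stated above.

```python
# Region-replacement augmentation with soft labels (sketch; see caveats above).
import numpy as np

def brain_aware_replace(anchor, other, y_anchor, y_other, parcellation, regions):
    """Swap the listed parcel ids from `other` into `anchor`; mix labels by ratio."""
    mixed = anchor.copy()
    mask = np.isin(parcellation, regions)      # voxels belonging to chosen parcels
    mixed[mask] = other[mask]
    ratio = mask.mean()                        # fraction of replaced voxels
    y_soft = (1.0 - ratio) * y_anchor + ratio * y_other
    return mixed, y_soft

rng = np.random.default_rng(0)
vol_a, vol_b = rng.normal(size=(8, 8, 8)), rng.normal(size=(8, 8, 8))
parc = rng.integers(0, 4, size=(8, 8, 8))      # toy 4-parcel "parcellation"
x, y = brain_aware_replace(vol_a, vol_b, 1.0, 0.0, parc, regions=[1, 3])
```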

{"title":"Brain-Aware Replacements for Supervised Contrastive Learning in Detection of Alzheimer's Disease.","authors":"Mehmet Saygın Seyfioğlu, Zixuan Liu, Pranav Kamath, Sadjyot Gangolli, Sheng Wang, Thomas Grabowski, Linda Shapiro","doi":"10.1007/978-3-031-16431-6_44","DOIUrl":"https://doi.org/10.1007/978-3-031-16431-6_44","url":null,"abstract":"<p><p>We propose a novel framework for Alzheimer's disease (AD) detection using brain MRIs. The framework starts with a data augmentation method called Brain-Aware Replacements (BAR), which leverages a standard brain parcellation to replace medically-relevant 3D brain regions in an anchor MRI from a randomly picked MRI to create synthetic samples. Ground truth \"hard\" labels are also linearly mixed depending on the replacement ratio in order to create \"soft\" labels. BAR produces a great variety of realistic-looking synthetic MRIs with higher local variability compared to other mix-based methods, such as CutMix. On top of BAR, we propose using a soft-label-capable supervised contrastive loss, aiming to learn the relative similarity of representations that reflect how mixed are the synthetic MRIs using our soft labels. This way, we do not fully exhaust the entropic capacity of our hard labels, since we only use them to create soft labels and synthetic MRIs through BAR. We show that a model pre-trained using our framework can be further fine-tuned with a cross-entropy loss using the hard labels that were used to create the synthetic samples. We validated the performance of our framework in a binary AD detection task against both from-scratch supervised training and state-of-the-art self-supervised training plus fine-tuning approaches. Then we evaluated BAR's individual performance compared to another mix-based method CutMix by integrating it within our framework. We show that our framework yields superior results in both precision and recall for the AD detection task.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"13431 ","pages":"461-470"},"PeriodicalIF":0.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11056282/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140859527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Anatomy-Guided Weakly-Supervised Abnormality Localization in Chest X-rays.
Ke Yu, Shantanu Ghosh, Zhexiong Liu, Christopher Deible, Kayhan Batmanghelich

Creating a large-scale dataset of abnormality annotations on medical images is a labor-intensive and costly task. Leveraging weak supervision from readily available data such as radiology reports can compensate for the lack of large-scale data for anomaly detection methods. However, most current methods use only image-level pathological observations, failing to utilize the relevant anatomy mentions in reports. Furthermore, Natural Language Processing (NLP)-mined weak labels are noisy due to label sparsity and linguistic ambiguity. We propose an Anatomy-Guided chest X-ray Network (AGXNet) to address these issues of weak annotation. Our framework consists of a cascade of two networks, one responsible for identifying anatomical abnormalities and the second for pathological observations. The critical component in our framework is an anatomy-guided attention module that aids the downstream observation network in focusing on the relevant anatomical regions generated by the anatomy network. We use Positive Unlabeled (PU) learning to account for the fact that the lack of a mention does not necessarily mean a negative label. Our quantitative and qualitative results on the MIMIC-CXR dataset demonstrate the effectiveness of AGXNet in disease and anatomical abnormality localization. Experiments on the NIH Chest X-ray dataset show that the learned feature representations are transferable and achieve state-of-the-art performance in disease classification along with competitive disease localization results. Our code is available at https://github.com/batmanlab/AGXNet.
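
A minimal sketch of what an anatomy-guided attention module could look like, assuming PyTorch: a saliency map from the anatomy branch gates the observation branch's features. The shapes and the sigmoid gating form are assumptions, not the published AGXNet module.

```python
# Anatomy-guided attention gate (sketch; shapes and gating form are assumptions).
import torch
import torch.nn as nn

class AnatomyGuidedAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(1, channels, kernel_size=1)  # lift map to channels

    def forward(self, obs_feats, anatomy_map):
        # anatomy_map: (B, 1, H, W) saliency produced by the anatomy network
        attn = torch.sigmoid(self.gate(anatomy_map))
        return obs_feats * attn                # focus the observation features

feats = torch.randn(2, 32, 16, 16)             # observation-branch features
amap = torch.rand(2, 1, 16, 16)                # anatomy-branch saliency
out = AnatomyGuidedAttention(32)(feats, amap)
```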

{"title":"Anatomy-Guided Weakly-Supervised Abnormality Localization in Chest X-rays.","authors":"Ke Yu, Shantanu Ghosh, Zhexiong Liu, Christopher Deible, Kayhan Batmanghelich","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Creating a large-scale dataset of abnormality annotation on medical images is a labor-intensive and costly task. Leveraging <i>weak supervision</i> from readily available data such as radiology reports can compensate lack of large-scale data for anomaly detection methods. However, most of the current methods only use image-level pathological observations, failing to utilize the relevant <i>anatomy mentions</i> in reports. Furthermore, Natural Language Processing (NLP)-mined weak labels are noisy due to label sparsity and linguistic ambiguity. We propose an Anatomy-Guided chest X-ray Network (AGXNet) to address these issues of weak annotation. Our framework consists of a cascade of two networks, one responsible for identifying anatomical abnormalities and the second responsible for pathological observations. The critical component in our framework is an anatomy-guided attention module that aids the downstream observation network in focusing on the relevant anatomical regions generated by the anatomy network. We use Positive Unlabeled (PU) learning to account for the fact that lack of mention does not necessarily mean a negative label. Our quantitative and qualitative results on the MIMIC-CXR dataset demonstrate the effectiveness of AGXNet in disease and anatomical abnormality localization. Experiments on the NIH Chest X-ray dataset show that the learned feature representations are transferable and can achieve the state-of-the-art performances in disease classification and competitive disease localization results. Our code is available at https://github.com/batmanlab/AGXNet.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"13435 ","pages":"658-668"},"PeriodicalIF":0.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11215940/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141478322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Style Transfer Using Generative Adversarial Networks for Multi-Site MRI Harmonization.
Mengting Liu, Piyush Maiti, Sophia Thomopoulos, Alyssa Zhu, Yaqiong Chai, Hosung Kim, Neda Jahanshad

Large data initiatives and high-powered brain imaging analyses require the pooling of MR images acquired across multiple scanners, often using different protocols. Prospective cross-site harmonization often involves the use of a phantom or traveling subjects. However, as more datasets become publicly available, there is a growing need for retrospective harmonization, pooling data from sites that were not originally coordinated. Several retrospective harmonization techniques have shown promise in removing cross-site image variation. However, most unsupervised methods cannot distinguish image-acquisition variability from cross-site population variability, so they require that datasets contain subjects or patient groups with similar clinical or demographic information. To overcome this limitation, we consider cross-site MRI harmonization as a style transfer problem rather than a domain transfer problem. Using a fully unsupervised deep-learning framework based on a generative adversarial network (GAN), we show that MR images can be harmonized by directly inserting the style information encoded from a reference image, without knowing their site/scanner labels a priori. We trained our model using data from five large-scale multi-site datasets with varied demographics. Results demonstrated that our style-encoding model can successfully harmonize MR images and match intensity profiles without relying on traveling subjects. This model also avoids the need to control for clinical, diagnostic, or demographic information. Moreover, we further demonstrated that when we included sufficiently diverse images in the training set, our method successfully harmonized MR images collected from unseen scanners and protocols, suggesting a promising novel tool for ongoing collaborative studies.
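
To illustrate the style-injection idea, the sketch below uses plain adaptive instance normalization (AdaIN), re-normalizing content features to a reference image's feature statistics. This is a stand-in for the paper's style-encoding pathway, assuming PyTorch; the actual generator is more elaborate.

```python
# AdaIN-style stand-in for style injection (see caveats above).
import torch

def adain(content, style, eps=1e-5):
    """Re-normalize content features to the style features' channel statistics."""
    c_mu = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mu = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return (content - c_mu) / c_std * s_std + s_mu

content = torch.randn(1, 16, 32, 32)    # features of the image to harmonize
reference = torch.randn(1, 16, 32, 32)  # features encoded from a reference-site image
harmonized = adain(content, reference)
```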

{"title":"Style Transfer Using Generative Adversarial Networks for Multi-Site MRI Harmonization.","authors":"Mengting Liu, Piyush Maiti, Sophia Thomopoulos, Alyssa Zhu, Yaqiong Chai, Hosung Kim, Neda Jahanshad","doi":"10.1007/978-3-030-87199-4_30","DOIUrl":"10.1007/978-3-030-87199-4_30","url":null,"abstract":"<p><p>Large data initiatives and high-powered brain imaging analyses require the pooling of MR images acquired across multiple scanners, often using different protocols. Prospective cross-site harmonization often involves the use of a phantom or traveling subjects. However, as more datasets are becoming publicly available, there is a growing need for retrospective harmonization, pooling data from sites not originally coordinated together. Several retrospective harmonization techniques have shown promise in removing cross-site image variation. However, most unsupervised methods cannot distinguish between image-acquisition based variability and cross-site population variability, so they require that datasets contain subjects or patient groups with similar clinical or demographic information. To overcome this limitation, we consider cross-site MRI image harmonization as a style transfer problem rather than a domain transfer problem. Using a fully unsupervised deep-learning framework based on a generative adversarial network (GAN), we show that MR images can be harmonized by inserting the style information encoded from a reference image directly, without knowing their site/scanner labels <i>a priori</i>. We trained our model using data from five large-scale multi-site datasets with varied demographics. Results demonstrated that our style-encoding model can harmonize MR images, and match intensity profiles, successfully, without relying on traveling subjects. This model also avoids the need to control for clinical, diagnostic, or demographic information. Moreover, we further demonstrated that if we included diverse enough images into the training set, our method successfully harmonized MR images collected from unseen scanners and protocols, suggesting a promising novel tool for ongoing collaborative studies.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"12903 ","pages":"313-322"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9137427/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139731376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cortical Surface Parcellation using Spherical Convolutional Neural Networks.
Prasanna Parvathaneni, Shunxing Bao, Vishwesh Nath, Neil D Woodward, Daniel O Claassen, Carissa J Cascio, David H Zald, Yuankai Huo, Bennett A Landman, Ilwoo Lyu

We present cortical surface parcellation using spherical deep convolutional neural networks. Traditional multi-atlas cortical surface parcellation requires inter-subject surface registration using geometric features, which is slow even for a single subject (2-3 hours). Moreover, even optimal surface registration does not necessarily produce optimal cortical parcellation, as parcel boundaries are not fully matched to the geometric features. In this context, the choice of training features is important for accurate cortical parcellation. To utilize the networks efficiently, we propose cortical-parcellation-specific input data derived from the irregular and complicated structure of cortical surfaces. To this end, we align ground-truth cortical parcel boundaries and use the resulting deformation fields to generate new pairs of deformed geometric features and parcellation maps. To extend the capability of the networks, we then smoothly morph cortical geometric features and parcellation maps using the intermediate deformation fields. We validate our method on 427 adult brains across 49 labels. The experimental results show that our method outperforms traditional multi-atlas and naive spherical U-Net approaches, while achieving full cortical parcellation in less than a minute.
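
The intermediate-deformation augmentation step might look like the following SciPy sketch: a deformation field is scaled by t in (0, 1] and used to warp a feature map and its parcellation labels together. Field generation and the spherical geometry are omitted; only the morphing idea is shown.

```python
# Warping features and labels with a scaled deformation field (sketch).
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, field, t, order):
    """Warp `image` along `t * field`; order=0 keeps parcel labels discrete."""
    grid = np.meshgrid(*[np.arange(s) for s in image.shape], indexing="ij")
    coords = [g + t * f for g, f in zip(grid, field)]
    return map_coordinates(image, coords, order=order, mode="nearest")

rng = np.random.default_rng(0)
feat = rng.normal(size=(64, 64))               # geometric feature map
labels = rng.integers(0, 49, size=(64, 64))    # parcellation map (49 labels)
field = [rng.normal(scale=2.0, size=(64, 64)) for _ in range(2)]
for t in (0.25, 0.5, 1.0):                     # intermediate deformations
    feat_t = warp(feat, field, t, order=1)
    labels_t = warp(labels.astype(float), field, t, order=0).astype(int)
```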

{"title":"Cortical Surface Parcellation using Spherical Convolutional Neural Networks.","authors":"Prasanna Parvathaneni,&nbsp;Shunxing Bao,&nbsp;Vishwesh Nath,&nbsp;Neil D Woodward,&nbsp;Daniel O Claassen,&nbsp;Carissa J Cascio,&nbsp;David H Zald,&nbsp;Yuankai Huo,&nbsp;Bennett A Landman,&nbsp;Ilwoo Lyu","doi":"10.1007/978-3-030-32248-9_56","DOIUrl":"https://doi.org/10.1007/978-3-030-32248-9_56","url":null,"abstract":"<p><p>We present cortical surface parcellation using spherical deep convolutional neural networks. Traditional multi-atlas cortical surface parcellation requires inter-subject surface registration using geometric features with slow processing speed on a single subject (2-3 hours). Moreover, even optimal surface registration does not necessarily produce optimal cortical parcellation as parcel boundaries are not fully matched to the geometric features. In this context, a choice of training features is important for accurate cortical parcellation. To utilize the networks efficiently, we propose cortical parcellation-specific input data from an irregular and complicated structure of cortical surfaces. To this end, we align ground-truth cortical parcel boundaries and use their resulting deformation fields to generate new pairs of deformed geometric features and parcellation maps. To extend the capability of the networks, we then smoothly morph cortical geometric features and parcellation maps using the intermediate deformation fields. We validate our method on 427 adult brains for 49 labels. The experimental results show that our method outperforms traditional multi-atlas and naive spherical U-Net approaches, while achieving full cortical parcellation in less than a minute.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"11766 ","pages":"501-509"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6892466/pdf/nihms-1059107.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49687027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
Active Appearance Model Induced Generative Adversarial Network for Controlled Data Augmentation.
Jianfei Liu, Christine Shen, Tao Liu, Nancy Aguilera, Johnny Tam

Data augmentation is an important strategy for enlarging training datasets in deep learning-based medical image analysis. This is because large, annotated medical datasets are not only difficult and costly to generate, but also quickly become obsolete due to rapid advances in imaging technology. Image-to-image conditional generative adversarial networks (C-GAN) provide a potential solution for data augmentation. However, annotations used as inputs to C-GAN are typically based only on shape information, which can result in undesirable intensity distributions in the resulting artificially-created images. In this paper, we introduce an active cell appearance model (ACAM) that can measure statistical distributions of shape and intensity and use this ACAM model to guide C-GAN to generate more realistic images, which we call A-GAN. A-GAN provides an effective means for conveying anisotropic intensity information to C-GAN. A-GAN incorporates a statistical model (ACAM) to determine how transformations are applied for data augmentation. Traditional approaches for data augmentation that are based on arbitrary transformations might lead to unrealistic shape variations in an augmented dataset that are not representative of real data. A-GAN is designed to ameliorate this. To validate the effectiveness of using A-GAN for data augmentation, we assessed its performance on cell analysis in adaptive optics retinal imaging, which is a rapidly-changing medical imaging modality. Compared to C-GAN, A-GAN achieved stability in fewer iterations. The cell detection and segmentation accuracy when assisted by A-GAN augmentation was higher than that achieved with C-GAN. These findings demonstrate the potential for A-GAN to substantially improve existing data augmentation methods in medical image analysis.
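
For intuition, a statistical shape-and-intensity model in the AAM family can be sketched as PCA over stacked shape/intensity vectors, with new samples drawn within a few standard deviations of the modes. The NumPy sketch below shows only this sampling idea, not the actual ACAM or its coupling to the GAN.

```python
# PCA-based shape/appearance model with bounded sampling (sketch).
import numpy as np

def fit_appearance_model(X, n_modes=5):
    """X: (n_samples, n_features) stacked shape+intensity vectors."""
    mean = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                        # principal variation modes
    stdev = s[:n_modes] / np.sqrt(len(X) - 1)   # per-mode standard deviation
    return mean, modes, stdev

def sample(mean, modes, stdev, rng, limit=2.0):
    b = rng.uniform(-limit, limit, size=len(stdev)) * stdev
    return mean + b @ modes                     # statistically plausible sample

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 128))                  # toy training vectors
m, P, sd = fit_appearance_model(X)
new_appearance = sample(m, P, sd, rng)          # input for GAN conditioning
```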

{"title":"Active Appearance Model Induced Generative Adversarial Network for Controlled Data Augmentation.","authors":"Jianfei Liu,&nbsp;Christine Shen,&nbsp;Tao Liu,&nbsp;Nancy Aguilera,&nbsp;Johnny Tam","doi":"10.1007/978-3-030-32239-7_23","DOIUrl":"https://doi.org/10.1007/978-3-030-32239-7_23","url":null,"abstract":"<p><p>Data augmentation is an important strategy for enlarging training datasets in deep learning-based medical image analysis. This is because large, annotated medical datasets are not only difficult and costly to generate, but also quickly become obsolete due to rapid advances in imaging technology. Image-to-image conditional generative adversarial networks (C-GAN) provide a potential solution for data augmentation. However, annotations used as inputs to C-GAN are typically based only on shape information, which can result in undesirable intensity distributions in the resulting artificially-created images. In this paper, we introduce an active cell appearance model (ACAM) that can measure statistical distributions of shape and intensity and use this ACAM model to guide C-GAN to generate more realistic images, which we call A-GAN. A-GAN provides an effective means for conveying anisotropic intensity information to C-GAN. A-GAN incorporates a statistical model (ACAM) to determine how transformations are applied for data augmentation. Traditional approaches for data augmentation that are based on arbitrary transformations might lead to unrealistic shape variations in an augmented dataset that are not representative of real data. A-GAN is designed to ameliorate this. To validate the effectiveness of using A-GAN for data augmentation, we assessed its performance on cell analysis in adaptive optics retinal imaging, which is a rapidly-changing medical imaging modality. Compared to C-GAN, A-GAN achieved stability in fewer iterations. The cell detection and segmentation accuracy when assisted by A-GAN augmentation was higher than that achieved with C-GAN. These findings demonstrate the potential for A-GAN to substantially improve existing data augmentation methods in medical image analysis.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"11764 ","pages":"201-208"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6834374/pdf/nihms-1055537.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49687026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Pancreas Segmentation in MRI using Graph-Based Decision Fusion on Convolutional Neural Networks.
Jinzheng Cai, Le Lu, Zizhao Zhang, Fuyong Xing, Lin Yang, Qian Yin

Automated pancreas segmentation in medical images is a prerequisite for many clinical applications, such as diabetes inspection, pancreatic cancer diagnosis, and surgical planning. In this paper, we formulate pancreas segmentation in magnetic resonance imaging (MRI) scans as a graph-based decision fusion process combined with deep convolutional neural networks (CNN). Our approach conducts pancreatic detection and boundary segmentation with two types of CNN models, respectively: 1) a tissue detection step to differentiate pancreas and non-pancreas tissue using spatial intensity context; 2) a boundary detection step to allocate the semantic boundaries of the pancreas. The detection results of the two networks are fused together as the initialization of a conditional random field (CRF) framework to obtain the final segmentation output. Our approach achieves a mean Dice similarity coefficient (DSC) of 76.1% with a standard deviation of 8.7% on a dataset containing 78 abdominal MRI scans. The proposed algorithm achieves the best results compared with other state-of-the-art methods.
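
For reference, the reported metric is the Dice similarity coefficient; a minimal NumPy implementation for binary masks is shown below.

```python
# Dice similarity coefficient for binary masks.
import numpy as np

def dice(pred, truth, eps=1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.zeros((32, 32), bool); a[8:20, 8:20] = True    # toy prediction
b = np.zeros((32, 32), bool); b[10:22, 10:22] = True  # toy ground truth
print(f"DSC = {dice(a, b):.3f}")
```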

{"title":"Pancreas Segmentation in MRI using Graph-Based Decision Fusion on Convolutional Neural Networks.","authors":"Jinzheng Cai, Le Lu, Zizhao Zhang, Fuyong Xing, Lin Yang, Qian Yin","doi":"10.1007/978-3-319-46723-8_51","DOIUrl":"https://doi.org/10.1007/978-3-319-46723-8_51","url":null,"abstract":"<p><p>Automated pancreas segmentation in medical images is a prerequisite for many clinical applications, such as diabetes inspection, pancreatic cancer diagnosis, and surgical planing. In this paper, we formulate pancreas segmentation in magnetic resonance imaging (MRI) scans as a graph based decision fusion process combined with deep convolutional neural networks (CNN). Our approach conducts pancreatic detection and boundary segmentation with two types of CNN models respectively: 1) the tissue detection step to differentiate pancreas and non-pancreas tissue with spatial intensity context; 2) the boundary detection step to allocate the semantic boundaries of pancreas. Both detection results of the two networks are fused together as the initialization of a conditional random field (CRF) framework to obtain the final segmentation output. Our approach achieves the mean dice similarity coefficient (DSC) 76.1% with the standard deviation of 8.7% in a dataset containing 78 abdominal MRI scans. The proposed algorithm achieves the best results compared with other state of the arts.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"9901 ","pages":"442-450"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5223591/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140195393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Robust Cell Detection and Segmentation in Histopathological Images Using Sparse Reconstruction and Stacked Denoising Autoencoders.
Hai Su, Fuyong Xing, Xiangfei Kong, Yuanpu Xie, Shaoting Zhang, Lin Yang

Computer-aided diagnosis (CAD) is a promising tool for accurate and consistent diagnosis and prognosis. Cell detection and segmentation are essential steps for CAD. These tasks are challenging due to variations in cell shapes, touching cells, and cluttered backgrounds. In this paper, we present a cell detection and segmentation algorithm using sparse reconstruction with trivial templates and a stacked denoising autoencoder (sDAE). The sparse reconstruction handles shape variations by representing a testing patch as a linear combination of shapes in the learned dictionary. Trivial templates are used to model the touching parts. The sDAE, trained with the original data and their structured labels, is used for cell segmentation. To the best of our knowledge, this is the first study to apply sparse reconstruction and an sDAE with structured labels to cell detection and segmentation. The proposed method is extensively tested on two datasets containing more than 3000 cells obtained from brain tumor and lung cancer images. Our algorithm achieves the best performance compared with other state-of-the-art methods.
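
The sparse-reconstruction step can be sketched with scikit-learn: a test patch is coded over a learned shape dictionary augmented with identity columns ("trivial templates") that absorb corruption from touching cells. The dictionary here is random for illustration; learning it is omitted.

```python
# Sparse coding over a dictionary augmented with trivial templates (sketch).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, n_atoms = 100, 20
D = rng.normal(size=(d, n_atoms))               # stand-in for a learned dictionary
D /= np.linalg.norm(D, axis=0)
A = np.hstack([D, np.eye(d)])                   # append identity trivial templates

y = D[:, 3] + 0.5 * (rng.random(d) < 0.05)      # patch plus sparse corruption
coder = Lasso(alpha=0.01, max_iter=5000).fit(A, y)
shape_code = coder.coef_[:n_atoms]              # coefficients over cell shapes
corruption = coder.coef_[n_atoms:]              # absorbed by trivial templates
```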

{"title":"Robust Cell Detection and Segmentation in Histopathological Images Using Sparse Reconstruction and Stacked Denoising Autoencoders.","authors":"Hai Su, Fuyong Xing, Xiangfei Kong, Yuanpu Xie, Shaoting Zhang, Lin Yang","doi":"10.1007/978-3-319-24574-4_46","DOIUrl":"https://doi.org/10.1007/978-3-319-24574-4_46","url":null,"abstract":"<p><p>Computer-aided diagnosis (CAD) is a promising tool for accurate and consistent diagnosis and prognosis. Cell detection and segmentation are essential steps for CAD. These tasks are challenging due to variations in cell shapes, touching cells, and cluttered background. In this paper, we present a cell detection and segmentation algorithm using the sparse reconstruction with trivial templates and a stacked denoising autoencoder (sDAE). The sparse reconstruction handles the shape variations by representing a testing patch as a linear combination of shapes in the learned dictionary. Trivial templates are used to model the touching parts. The sDAE, trained with the original data and their structured labels, is used for cell segmentation. To the best of our knowledge, this is the first study to apply sparse reconstruction and sDAE with structured labels for cell detection and segmentation. The proposed method is extensively tested on two data sets containing more than 3000 cells obtained from brain tumor and lung cancer images. Our algorithm achieves the best performance compared with other state of the arts.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"9351 ","pages":"383-390"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5081214/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140290109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automated Model-Based Segmentation of the Left and Right Ventricles in Tagged Cardiac MRI.
Albert Montillo, Dimitris Metaxas, Leon Axel

We describe an automated, model-based method to segment the left and right ventricles in 4D tagged MR. We fit 3D epicardial and endocardial surface models to ventricle features extracted from the image data. Excellent segmentation is achieved using novel methods that (1) initialize the models and (2) compute 3D model forces from 2D tagged MR images. The 3D forces guide the models to patient-specific anatomy, while the fit is regularized via the internal deformation strain energy of a thin plate. Deformation continues until the forces equilibrate or vanish. The segmentations are validated quantitatively and qualitatively on normal and diseased subjects.
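
A toy version of the deform-until-equilibrium loop, assuming NumPy: vertices move under an external image force while a discrete neighbor-smoothing term stands in for the thin-plate strain energy. The real method uses 3D surface models and forces computed from 2D tagged MR, which are omitted here.

```python
# Deform a contour until external and internal forces (nearly) balance (sketch).
import numpy as np

def deform(verts, ext_force, alpha=0.3, step=0.1, tol=1e-4, max_iter=500):
    for _ in range(max_iter):
        # Neighbor averaging stands in for the thin-plate internal force.
        internal = 0.5 * (np.roll(verts, 1, 0) + np.roll(verts, -1, 0)) - verts
        force = ext_force(verts) + alpha * internal
        verts = verts + step * force
        if np.abs(force).max() < tol:           # forces have (nearly) vanished
            break
    return verts

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # initial model
pull = lambda v: 1.5 * v / np.linalg.norm(v, axis=1, keepdims=True) - v
fitted = deform(circle, pull)                    # relaxes toward radius ~1.5
```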

{"title":"Automated Model-Based Segmentation of the Left and Right Ventricles in Tagged Cardiac MRI.","authors":"Albert Montillo,&nbsp;Dimitris Metaxas,&nbsp;Leon Axel","doi":"10.1007/978-3-540-39899-8_63","DOIUrl":"https://doi.org/10.1007/978-3-540-39899-8_63","url":null,"abstract":"<p><p>We describe an automated, model-based method to segment the left and right ventricles in 4D tagged MR. We fit 3D epicardial and endocardial surface models to ventricle features we extract from the image data. Excellent segmentation is achieved using novel methods that (1) initialize the models and (2) that compute 3D model forces from 2D tagged MR images. The 3D forces guide the models to patient-specific anatomy while the fit is regularized via internal deformation strain energy of a thin plate. Deformation continues until the forces equilibrate or vanish. Validation of the segmentations is performed quantitatively and qualitatively on normal and diseased subjects.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"2878 ","pages":"507-515"},"PeriodicalIF":0.0,"publicationDate":"2003-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/978-3-540-39899-8_63","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41224576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Automated Segmentation of the Left and Right Ventricles in 4D Cardiac SPAMM Images.
Albert Montillo, Dimitris Metaxas, Leon Axel

In this paper we describe a completely automated, volume-based method for the segmentation of the left and right ventricles in 4D tagged MR (SPAMM) images for quantitative cardiac analysis. We correct the background intensity variation in each volume caused by surface coils using a new scale-based fuzzy connectedness procedure. We apply 3D grayscale opening to the corrected data to create volumes containing only the blood-filled regions. We threshold the volumes by minimizing region variance or by an adaptive statistical thresholding method. We isolate the ventricular blood-filled regions using a novel approach based on spatial and temporal shape similarity. We use these regions to define the endocardium contours and use them to initialize an active contour that locates the epicardium through the gradient vector flow of an edge map of a grayscale-closed image. Both quantitative and qualitative results on normal and diseased patients are presented.
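
Two of the preprocessing steps named above (grayscale opening and variance-minimizing thresholding) can be sketched with SciPy and scikit-image; Otsu's threshold is used here as the variance-minimizing choice, and all parameters are illustrative.

```python
# Grayscale opening followed by a variance-minimizing (Otsu) threshold (sketch).
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
vol = rng.normal(loc=0.2, scale=0.05, size=(32, 32, 32))
vol[10:20, 10:20, 10:20] += 0.6                  # toy bright blood-filled region

opened = ndimage.grey_opening(vol, size=(3, 3, 3))  # suppress thin bright tag lines
t = threshold_otsu(opened)                       # minimizes within-class variance
blood_mask = opened > t
```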

{"title":"Automated Segmentation of the Left and Right Ventricles in 4D Cardiac SPAMM Images.","authors":"Albert Montillo,&nbsp;Dimitris Metaxas,&nbsp;Leon Axel","doi":"10.1007/3-540-45786-0_77","DOIUrl":"https://doi.org/10.1007/3-540-45786-0_77","url":null,"abstract":"<p><p>In this paper we describe a completely automated volume-based method for the segmentation of the left and right ventricles in 4D tagged MR (SPAMM) images for quantitative cardiac analysis. We correct the background intensity variation in each volume caused by surface coils using a new scale-based fuzzy connectedness procedure. We apply 3D grayscale opening to the corrected data to create volumes containing only the blood filled regions. We threshold the volumes by minimizing region variance or by an adaptive statistical thresholding method. We isolate the ventricular blood filled regions using a novel approach based on spatial and temporal shape similarity. We use these regions to define the endocardium contours and use them to initialize an active contour that locates the epicardium through the gradient vector flow of an edgemap of a grayscale-closed image. Both quantitative and qualitative results on normal and diseased patients are presented.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"2488 ","pages":"620-633"},"PeriodicalIF":0.0,"publicationDate":"2002-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/3-540-45786-0_77","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49687025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 59