
Latest Publications: Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention

Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation.
Haoteng Tang, Guodong Liu, Siyuan Dai, Kai Ye, Kun Zhao, Wenlu Wang, Carl Yang, Lifang He, Alex Leow, Paul Thompson, Heng Huang, Liang Zhan

The MRI-derived brain network serves as a pivotal instrument in elucidating both the structural and functional aspects of the brain, encompassing the ramifications of diseases and developmental processes. However, prevailing methodologies, often focusing on synchronous BOLD signals from functional MRI (fMRI), may not capture directional influences among brain regions and rarely tackle temporal functional dynamics. In this study, we first construct the brain effective network via a dynamic causal model. Subsequently, we introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE). This framework incorporates specifically designed directed node embedding layers that capture the dynamic interplay between structural and effective networks via an ordinary differential equation (ODE) model, which characterizes spatial-temporal brain dynamics. Our framework is validated on several clinical phenotype prediction tasks using two independent publicly available datasets (HCP and OASIS). The experimental results clearly demonstrate the advantages of our model compared to several state-of-the-art methods.
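The core modeling idea, evolving node embeddings on a brain network with an ODE, can be illustrated with a toy Euler integration. Everything here (the linear diffusion field, the row normalization, the 5-region network) is an illustrative assumption, not the authors' STE-ODE implementation:

```python
import numpy as np

# Toy sketch of ODE-based embedding evolution (NOT the authors' STE-ODE):
# node embeddings x evolve under dx/dt = A @ x - x, an Euler-discretized
# diffusion driven by a hypothetical directed effective-connectivity matrix.
def evolve_embeddings(A_eff, x0, dt=0.1, steps=50):
    A = A_eff / A_eff.sum(axis=1, keepdims=True)  # row-normalize out-weights
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * (A @ x - x)  # one Euler step of the ODE
    return x

rng = np.random.default_rng(0)
A_eff = rng.random((5, 5)) + 0.01    # 5 brain regions, directed weights
x0 = rng.standard_normal((5, 3))     # 3-dimensional node embeddings
x_T = evolve_embeddings(A_eff, x0)   # embeddings after temporal evolution
```

Under these diffusion dynamics the node embeddings contract toward a network-weighted consensus; the learned ODE in the paper replaces this fixed linear field with trainable embedding layers.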

DOI: 10.1007/978-3-031-72069-7_22 · Vol. 15002, pp. 227-237 · Published 2024-10-01 · Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11513182/pdf/
Citations: 0
Cross-Slice Attention and Evidential Critical Loss for Uncertainty-Aware Prostate Cancer Detection.
Alex Ling Yu Hung, Haoxin Zheng, Kai Zhao, Kaifeng Pang, Demetri Terzopoulos, Kyunghyun Sung

Current deep learning-based models typically analyze medical images either in 2D, disregarding volumetric information, or in 3D, suffering sub-optimal performance due to the anisotropic resolution of MR data. Furthermore, providing an accurate uncertainty estimate is beneficial to clinicians, as it indicates how confident a model is about its prediction. We propose a novel 2.5D cross-slice attention model that utilizes both global and local information, along with an evidential critical loss, to perform evidential deep learning for the detection in MR images of prostate cancer, one of the most common cancers and a leading cause of cancer-related death in men. We perform extensive experiments with our model on two different datasets and achieve state-of-the-art performance in prostate cancer detection along with improved epistemic uncertainty estimation. The implementation of the model is available at https://github.com/aL3x-O-o-Hung/GLCSA_ECLoss.
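The uncertainty-estimation side can be sketched with the standard evidential deep learning formulation (Dirichlet evidence), which we assume here for illustration; the paper's evidential critical loss is a specific design built on top of quantities like these:

```python
import numpy as np

# Minimal sketch of evidential uncertainty (assumed Dirichlet formulation,
# not the paper's exact loss): non-negative per-class "evidence" defines a
# Dirichlet with alpha = evidence + 1, giving a predictive probability and
# an epistemic uncertainty that shrinks as total evidence grows.
def evidential_outputs(evidence):
    alpha = evidence + 1.0                      # Dirichlet concentration
    S = alpha.sum(axis=-1, keepdims=True)       # total evidence + K
    prob = alpha / S                            # expected class probability
    K = evidence.shape[-1]
    uncertainty = K / S.squeeze(-1)             # epistemic uncertainty in (0, 1]
    return prob, uncertainty

confident = np.array([[9.0, 1.0]])  # strong evidence for class 0
unsure = np.array([[0.1, 0.1]])     # little evidence either way
p1, u1 = evidential_outputs(confident)
p2, u2 = evidential_outputs(unsure)
```

The low-evidence input yields a much larger uncertainty than the high-evidence one, which is the signal a clinician would consult alongside the prediction.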

DOI: 10.1007/978-3-031-72111-3_11 · Vol. 15008, pp. 113-123 · Published 2024-10-01 · Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11646698/pdf/
Citations: 0
MRIS: A Multi-modal Retrieval Approach for Image Synthesis on Diverse Modalities.
Boqi Chen, Marc Niethammer

Multiple imaging modalities are often used for disease diagnosis, prediction, or population-based analyses. However, not all modalities might be available due to cost, different study designs, or changes in imaging technology. If the differences between the types of imaging are small, data harmonization approaches can be used; for larger changes, direct image synthesis approaches have been explored. In this paper, we develop an approach based on multi-modal metric learning to synthesize images of diverse modalities. We use metric learning via multi-modal image retrieval, resulting in embeddings that can relate images of different modalities. Given a large image database, the learned image embeddings allow us to use k-nearest neighbor (k-NN) regression for image synthesis. Our driving medical problem is knee osteoarthritis (KOA), but the method we develop is general, given proper image alignment. We test our approach by synthesizing cartilage thickness maps obtained from 3D magnetic resonance (MR) images using 2D radiographs. Our experiments show that the proposed method outperforms direct image synthesis and that the synthesized thickness maps retain information relevant to downstream tasks such as progression prediction and Kellgren-Lawrence grading (KLG). Our results suggest that retrieval approaches can be used to obtain high-quality and meaningful image synthesis results given large image databases.
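The retrieval-then-regress step can be sketched as follows, with randomly generated stand-ins for the learned embeddings and the paired cartilage thickness maps:

```python
import numpy as np

# Toy sketch of k-NN regression over a retrieval database (hypothetical
# embeddings, not the learned multi-modal metric): given a query embedding
# from one modality, average the target-modality images of its k nearest
# database entries.
def knn_synthesize(query_emb, db_emb, db_targets, k=3):
    d = np.linalg.norm(db_emb - query_emb[None, :], axis=1)  # distances to database
    idx = np.argsort(d)[:k]                                  # k nearest neighbors
    return db_targets[idx].mean(axis=0)                      # k-NN regression

rng = np.random.default_rng(1)
db_emb = rng.standard_normal((100, 8))     # database embeddings (e.g. from radiographs)
db_targets = rng.random((100, 16, 16))     # paired target maps (e.g. thickness maps)
query = db_emb[0] + 0.01 * rng.standard_normal(8)  # query very close to entry 0
synth = knn_synthesize(query, db_emb, db_targets, k=1)
```

With k=1 this reduces to nearest-neighbor lookup; larger k trades sharpness for robustness, and the quality of the metric embedding determines whether the retrieved neighbors are anatomically meaningful.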

DOI: 10.1007/978-3-031-43999-5_26 · Vol. 14229, pp. 271-281 · Published 2023-10-01 · Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11378323/pdf/
Citations: 0
How Does Pruning Impact Long-Tailed Multi-label Medical Image Classifiers?
Gregory Holste, Ziyu Jiang, Ajay Jaiswal, Maria Hanna, Shlomo Minkowitz, Alan C Legasto, Joanna G Escalon, Sharon Steinberger, Mark Bittman, Thomas C Shen, Ying Ding, Ronald M Summers, George Shih, Yifan Peng, Zhangyang Wang

Pruning has emerged as a powerful technique for compressing deep neural networks, reducing memory usage and inference time without significantly affecting overall performance. However, the nuanced ways in which pruning impacts model behavior are not well understood, particularly for long-tailed, multi-label datasets commonly found in clinical settings. This knowledge gap could have dangerous implications when deploying a pruned model for diagnosis, where unexpected model behavior could impact patient well-being. To fill this gap, we perform the first analysis of pruning's effect on neural networks trained to diagnose thorax diseases from chest X-rays (CXRs). On two large CXR datasets, we examine which diseases are most affected by pruning and characterize class "forgettability" based on disease frequency and co-occurrence behavior. Further, we identify individual CXRs where uncompressed and heavily pruned models disagree, known as pruning-identified exemplars (PIEs), and conduct a human reader study to evaluate their unifying qualities. We find that radiologists perceive PIEs as having more label noise, lower image quality, and higher diagnosis difficulty. This work represents a first step toward understanding the impact of pruning on model behavior in deep long-tailed, multi-label medical image classification. All code, model weights, and data access instructions can be found at https://github.com/VITA-Group/PruneCXR.
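As a concrete reference point, global magnitude pruning, the common compression technique such analyses start from, looks like this (a generic sketch, not the paper's training pipeline):

```python
import numpy as np

# Generic magnitude-pruning sketch: zero out the fraction `sparsity` of
# weights with the smallest absolute value. Real pipelines apply this to
# every layer of a trained network, then optionally fine-tune.
def magnitude_prune(w, sparsity):
    k = int(round(sparsity * w.size))
    if k == 0:
        return w.copy()
    thresh = np.sort(np.abs(w), axis=None)[k - 1]  # k-th smallest magnitude
    pruned = w.copy()
    pruned[np.abs(w) <= thresh] = 0.0              # remove low-magnitude weights
    return pruned

rng = np.random.default_rng(2)
w = rng.standard_normal((10, 10))   # stand-in for one weight tensor
w90 = magnitude_prune(w, 0.9)       # heavily pruned: 90% of weights removed
```

The paper's question is what such heavy sparsification does to per-class behavior on long-tailed label distributions, which aggregate accuracy alone does not reveal.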

DOI: 10.1007/978-3-031-43904-9_64 · Vol. 14224, pp. 663-673 · Published 2023-10-01 · Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10568970/pdf/nihms-1936096.pdf
Citations: 0
One-shot Federated Learning on Medical Data using Knowledge Distillation with Image Synthesis and Client Model Adaptation.
Myeongkyun Kang, Philip Chikontwe, Soopil Kim, Kyong Hwan Jin, Ehsan Adeli, Kilian M Pohl, Sang Hyun Park

One-shot federated learning (FL) has emerged as a promising solution in scenarios where multiple communication rounds are not practical. Notably, as feature distributions in medical data are less discriminative than those of natural images, robust global model training with FL is non-trivial and can lead to overfitting. To address this issue, we propose a novel one-shot FL framework leveraging Image Synthesis and Client model Adaptation (FedISCA) with knowledge distillation (KD). To prevent overfitting, we generate diverse synthetic images ranging from random noise to realistic images. This approach (i) alleviates data privacy concerns and (ii) facilitates robust global model training using KD with decentralized client models. To mitigate domain disparity in the early stages of synthesis, we design noise-adapted client models in which batch normalization statistics on random noise (synthetic images) are updated to enhance KD. Lastly, the global model is trained with both the original and noise-adapted client models via KD and synthetic images. This process is repeated until the global model converges. Extensive evaluation of this design on five small- and three large-scale medical image classification datasets reveals superior accuracy over prior methods. Code is available at https://github.com/myeongkyunkang/FedISCA.
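The distillation step at the heart of such a framework can be sketched as a temperature-scaled KL loss between the global student and the averaged client teachers; the logits below are made-up stand-ins for model outputs on synthetic images:

```python
import numpy as np

# Generic KD sketch (the distillation component only, not FedISCA itself):
# the student matches the average softened prediction of the client teachers.
def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits_list, T=2.0):
    p_t = np.mean([softmax(t, T) for t in teacher_logits_list], axis=0)  # teacher ensemble
    p_s = softmax(student_logits, T)
    # KL(p_t || p_s), averaged over the batch
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s))) / len(student_logits))

teachers = [np.array([[2.0, 0.0, 0.0]]),    # hypothetical client model outputs
            np.array([[1.5, 0.5, 0.0]])]
aligned = kd_loss(np.array([[1.75, 0.25, 0.0]]), teachers)
misaligned = kd_loss(np.array([[0.0, 0.0, 2.0]]), teachers)
```

A student whose logits agree with the teacher ensemble incurs a much smaller loss than a disagreeing one, which is the gradient signal that trains the global model without sharing raw client data.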

DOI: 10.1007/978-3-031-43895-0_49 · Vol. 14221, pp. 521-531 · Published 2023-10-01 · Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10781197/pdf/
Citations: 0
Laplacian-Former: Overcoming the Limitations of Vision Transformers in Local Texture Detection.
Reza Azad, Amirhossein Kazerouni, Babak Azad, Ehsan Khodapanah Aghdam, Yury Velichko, Ulas Bagci, Dorit Merhof

Vision Transformer (ViT) models have demonstrated a breakthrough in a wide range of computer vision tasks. However, compared to Convolutional Neural Network (CNN) models, ViT models have been observed to struggle to capture the high-frequency components of images, which can limit their ability to detect local textures and edge information. As abnormalities in human tissue, such as tumors and lesions, may vary greatly in structure, texture, and shape, high-frequency information such as texture is crucial for effective semantic segmentation tasks. To address this limitation of ViT models, we propose a new technique, Laplacian-Former, that enhances the self-attention map by adaptively re-calibrating the frequency information in a Laplacian pyramid. More specifically, our method employs a dual attention mechanism combining efficient attention and frequency attention: the efficient attention mechanism reduces the complexity of self-attention to linear while producing the same output, selectively intensifying the contribution of shape and texture features. Furthermore, we introduce a novel efficient enhancement multi-scale bridge that effectively transfers spatial information from the encoder to the decoder while preserving the fundamental features. We demonstrate the efficacy of Laplacian-Former on multi-organ and skin lesion segmentation tasks, with +1.87% and +0.76% dice score improvements over SOTA approaches, respectively. Our implementation is publicly available at GitHub.
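The Laplacian pyramid that supplies the frequency information can be sketched in a few lines; for simplicity we use average pooling and nearest-neighbor upsampling in place of the usual Gaussian filtering:

```python
import numpy as np

# Two-level Laplacian pyramid sketch (simplified stand-in for Gaussian
# pyramids): the band-pass residual `high` carries the high-frequency
# texture and edge content that the paper re-calibrates in attention.
def down2(img):
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # 2x average pool

def up2(img):
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)      # nearest upsample

def laplacian_pyramid(img):
    low = down2(img)          # low-frequency approximation
    high = img - up2(low)     # high-frequency residual (edges, texture)
    return low, high

rng = np.random.default_rng(3)
img = rng.random((8, 8))
low, high = laplacian_pyramid(img)
recon = up2(low) + high       # pyramid is exactly invertible
```

Because the decomposition is invertible, re-weighting `high` before reconstruction amplifies texture without losing the coarse structure, which is the intuition behind frequency re-calibration.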

DOI: 10.1007/978-3-031-43898-1_70 · Vol. 14222, pp. 736-746 · Published 2023-10-01 · Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10830169/pdf/
Citations: 0
Cochlear Implant Fold Detection in Intra-operative CT Using Weakly Supervised Multi-task Deep Learning.
Mohammad M R Khan, Yubo Fan, Benoit M Dawant, Jack H Noble

In cochlear implant (CI) procedures, an electrode array is surgically inserted into the cochlea. The electrodes are used to stimulate the auditory nerve and restore hearing sensation for the recipient. If the array folds inside the cochlea during the insertion procedure, it can lead to trauma, damage to the residual hearing, and poor hearing restoration. Intraoperative detection of such a case can allow a surgeon to perform reimplantation. However, this intraoperative detection requires experience, and electrophysiological tests sometimes fail to detect an array folding. Due to the low incidence of array folding, we generated a dataset of CT images with folded synthetic electrode arrays and realistic metal artifacts. The dataset was used to train a multitask custom 3D-UNet model for array fold detection. We tested the trained model on real post-operative CTs (7 with folded arrays and 200 without). Our model could correctly classify all the fold-over cases while misclassifying only 3 non fold-over cases. Therefore, the model is a promising option for array fold detection.
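The reported test results (all 7 fold-over cases detected, 3 of 200 non-fold cases misclassified) translate into the following sensitivity and specificity:

```python
# Confusion-matrix arithmetic from the numbers stated in the abstract:
# 7 fold-over cases, all detected; 200 non-fold cases, 3 flagged in error.
tp, fn = 7, 0     # true positives, false negatives
tn, fp = 197, 3   # true negatives, false positives

sensitivity = tp / (tp + fn)  # 1.0: no folded array missed
specificity = tn / (tn + fp)  # 0.985: few false alarms
```

Perfect sensitivity is the clinically important property here, since a missed fold means the opportunity for intraoperative reimplantation is lost.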

DOI: 10.1007/978-3-031-43996-4_24 · Vol. 14228, pp. 249-259 · Published 2023-10-01 · Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10953791/pdf/
Citations: 0
Fast Reconstruction for Deep Learning PET Head Motion Correction.
Tianyi Zeng, Jiazhen Zhang, Eléonore V Lieffrig, Zhuotong Cai, Fuyao Chen, Chenyu You, Mika Naganawa, Yihuan Lu, John A Onofrey

Head motion correction is an essential component of brain PET imaging, in which even motion of small magnitude can greatly degrade image quality and introduce artifacts. Building upon previous work, we propose a new head motion correction framework taking fast reconstructions as input. The main characteristics of the proposed method are: (i) the adoption of a high-resolution short-frame fast reconstruction workflow; (ii) the development of a novel encoder for PET data representation extraction; and (iii) the implementation of data augmentation techniques. Ablation studies are conducted to assess the individual contributions of each of these design choices. Furthermore, multi-subject studies are conducted on an 18F-FPEB dataset, and the method performance is qualitatively and quantitatively evaluated by a MOLAR reconstruction study and corresponding brain Region of Interest (ROI) Standard Uptake Value (SUV) evaluation. Additionally, we compare our method with a conventional intensity-based registration method. Our results demonstrate that the proposed method outperforms other methods on all subjects, and can accurately estimate motion for subjects outside the training set. All code is publicly available on GitHub: https://github.com/OnofreyLab/dl-hmc_fast_recon_miccai2023.
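Rigid head motion of the kind being estimated is a rotation plus translation per time frame. A classical point-based fit (the Kabsch/Procrustes solution, shown here on made-up landmarks as a stand-in for the learned estimator) recovers such a transform as follows:

```python
import numpy as np

# Kabsch fit: find rotation R and translation t minimizing ||R @ P + t - Q||
# over matched 3D point sets P, Q (columns are points). This is the classical
# rigid-motion estimate, not the paper's deep-learning method.
def kabsch(P, Q):
    pc, qc = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - pc) @ (Q - qc).T                  # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = qc - R @ pc
    return R, t

rng = np.random.default_rng(4)
P = rng.standard_normal((3, 20))               # reference-frame landmarks
theta = 0.1                                    # small head rotation (radians)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = R_true @ P + np.array([[1.0], [0.5], [-0.2]])  # moved frame
R_est, t_est = kabsch(P, Q)
```

With noiseless correspondences the fit is exact; the appeal of a learned estimator is that it works directly from fast reconstructions, without explicit landmark matching.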

Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023, volume 14229, pages 710-719. DOI: 10.1007/978-3-031-43999-5_67. Published 2023-10-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10758999/pdf/
A Unified Deep-Learning-Based Framework for Cochlear Implant Electrode Array Localization.
Yubo Fan, Jianing Wang, Yiyuan Zhao, Rui Li, Han Liu, Robert F Labadie, Jack H Noble, Benoit M Dawant

Cochlear implants (CIs) are neuroprosthetics that can provide a sense of sound to people with severe-to-profound hearing loss. A CI contains an electrode array (EA) that is threaded into the cochlea during surgery. Recent studies have shown that hearing outcomes are correlated with EA placement. An image-guided cochlear implant programming technique is based on this correlation and utilizes the EA location with respect to the intracochlear anatomy to help audiologists adjust the CI settings to improve hearing. Automated methods to localize EA in postoperative CT images are of great interest for large-scale studies and for translation into the clinical workflow. In this work, we propose a unified deep-learning-based framework for automated EA localization. It consists of a multi-task network and a series of postprocessing algorithms to localize various types of EAs. The evaluation on a dataset with 27 cadaveric samples shows that its localization error is slightly smaller than the state-of-the-art method. Another evaluation on a large-scale clinical dataset containing 561 cases across two institutions demonstrates a significant improvement in robustness compared to the state-of-the-art method. This suggests that this technique could be integrated into the clinical workflow and provide audiologists with information that facilitates the programming of the implant leading to improved patient care.
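The abstract mentions a series of postprocessing algorithms downstream of the multi-task network. A common first step in such pipelines is extracting candidate positions as local maxima of a predicted score map; the sketch below is a generic illustration of that step (the toy grid, threshold, and 4-connectivity are assumptions rather than details from the paper).

```python
def local_maxima_2d(score, threshold=0.5):
    """Return (row, col) positions that exceed `threshold` and are
    strictly greater than their 4-connected neighbours."""
    rows, cols = len(score), len(score[0])
    peaks = []
    for r in range(rows):
        for c in range(cols):
            v = score[r][c]
            if v < threshold:
                continue
            neighbours = [
                score[rr][cc]
                for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if 0 <= rr < rows and 0 <= cc < cols
            ]
            if all(v > n for n in neighbours):
                peaks.append((r, c))
    return peaks

# Toy score map with two clear peaks.
score = [
    [0.1, 0.2, 0.1],
    [0.2, 0.9, 0.1],
    [0.1, 0.1, 0.8],
]
peaks = local_maxima_2d(score)
# peaks -> [(1, 1), (2, 2)]
```

In a real pipeline the candidates would then be ordered along the cochlear spiral to recover the contact sequence; that step depends on the EA model and is omitted here.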

Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023, volume 14228, pages 376-385. DOI: 10.1007/978-3-031-43996-4_36. Published 2023-10-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10976972/pdf/
DeepSOZ: A Robust Deep Model for Joint Temporal and Spatial Seizure Onset Localization from Multichannel EEG Data.
Deeksha M Shama, Jiasen Jing, Archana Venkataraman

We propose a robust deep learning framework to simultaneously detect and localize seizure activity from multichannel scalp EEG. Our model, called DeepSOZ, consists of a transformer encoder that generates global and channel-wise encodings. The global branch is combined with an LSTM for temporal seizure detection. In parallel, we employ attention-weighted multi-instance pooling of the channel-wise encodings to predict the seizure onset zone. DeepSOZ is trained in a supervised fashion and generates high-resolution predictions at the level of individual seconds (temporal) and EEG channels (spatial). We validate DeepSOZ via bootstrapped nested cross-validation on a large dataset of 120 patients curated from the Temple University Hospital corpus. Compared to baseline approaches, DeepSOZ provides robust overall performance in our multi-task learning setup. We also evaluate the intra-seizure and intra-patient consistency of DeepSOZ as a first step toward establishing its trustworthiness for integration into the clinical workflow for epilepsy.
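Attention-weighted multi-instance pooling can be sketched in isolation: softmax the per-channel relevance scores into weights, then take the weighted average. In DeepSOZ the pooled quantities are learned vector encodings; the scalar scores below are a simplified stand-in for illustration.

```python
import math

def attention_pool(channel_scores):
    """Attention-weighted multi-instance pooling over per-channel scores.

    Softmaxes the scores into attention weights, then returns the
    weighted average together with the weights themselves.
    """
    m = max(channel_scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in channel_scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    pooled = sum(w * s for w, s in zip(weights, channel_scores))
    return pooled, weights

# One channel scores much higher than the others, so it dominates the pool.
pooled, weights = attention_pool([0.0, 0.0, 4.0])
```

Inspecting `weights` shows why this pooling is interpretable for onset-zone localization: the weight assigned to each channel directly indicates how much that channel drove the prediction.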

Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023, volume 2023, pages 184-194. DOI: 10.1007/978-3-031-43993-3_18. Published 2023-10-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11545985/pdf/