
Latest publications: Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention

SLPT: Selective Labeling Meets Prompt Tuning on Label-Limited Lesion Segmentation
Fan Bai, K. Yan, Xiaoyu Bai, Xinyu Mao, Xiaoli Yin, Jingren Zhou, Yu Shi, Le Lu, Max Q.-H. Meng
Medical image analysis using deep learning is often challenged by limited labeled data and high annotation costs. Fine-tuning the entire network in label-limited scenarios can lead to overfitting and suboptimal performance. Recently, prompt tuning has emerged as a more promising technique that introduces a few additional tunable parameters as prompts to a task-agnostic pre-trained model, and updates only these parameters using supervision from limited labeled data while keeping the pre-trained model unchanged. However, previous work has overlooked the importance of selective labeling in downstream tasks, which aims to select the most valuable downstream samples for annotation to achieve the best performance with minimum annotation cost. To address this, we propose a framework that combines selective labeling with prompt tuning (SLPT) to boost performance under limited labels. Specifically, we introduce a feature-aware prompt updater to guide prompt tuning and a TandEm Selective LAbeling (TESLA) strategy. TESLA includes unsupervised diversity selection and supervised selection using prompt-based uncertainty. In addition, we propose a diversified visual prompt tuning strategy to provide multi-prompt-based discrepant predictions for TESLA. We evaluate our method on liver tumor segmentation and achieve state-of-the-art performance, outperforming traditional fine-tuning while using only 6% of the tunable parameters, and achieving 94% of full-data performance with only 5% of the data labeled.
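The frozen-backbone mechanics of prompt tuning can be sketched in a few lines of PyTorch. This is a generic illustration under our own assumptions (the PromptTunedEncoder class, its token shapes, and the prompt count are hypothetical), not the authors' feature-aware prompt updater:

```python
import torch
import torch.nn as nn

class PromptTunedEncoder(nn.Module):
    """Frozen pre-trained encoder; only a few prompt tokens are trainable."""
    def __init__(self, backbone: nn.Module, embed_dim: int, num_prompts: int = 8):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # keep the pre-trained model unchanged
        # the only new tunable parameters: a handful of prompt tokens
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, embed_dim) * 0.02)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) patch embeddings; prompts are prepended before the backbone
        b = tokens.size(0)
        x = torch.cat([self.prompts.expand(b, -1, -1), tokens], dim=1)
        return self.backbone(x)

# only the prompt parameters (plus any task head) receive gradients, e.g.:
# optimizer = torch.optim.AdamW([model.prompts], lr=1e-3)
```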
{"title":"SLPT: Selective Labeling Meets Prompt Tuning on Label-Limited Lesion Segmentation","authors":"Fan Bai, K. Yan, Xiaoyu Bai, Xinyu Mao, Xiaoli Yin, Jingren Zhou, Yu Shi, Le Lu, Max Q.-H. Meng","doi":"10.48550/arXiv.2308.04911","DOIUrl":"https://doi.org/10.48550/arXiv.2308.04911","url":null,"abstract":"Medical image analysis using deep learning is often challenged by limited labeled data and high annotation costs. Fine-tuning the entire network in label-limited scenarios can lead to overfitting and suboptimal performance. Recently, prompt tuning has emerged as a more promising technique that introduces a few additional tunable parameters as prompts to a task-agnostic pre-trained model, and updates only these parameters using supervision from limited labeled data while keeping the pre-trained model unchanged. However, previous work has overlooked the importance of selective labeling in downstream tasks, which aims to select the most valuable downstream samples for annotation to achieve the best performance with minimum annotation cost. To address this, we propose a framework that combines selective labeling with prompt tuning (SLPT) to boost performance in limited labels. Specifically, we introduce a feature-aware prompt updater to guide prompt tuning and a TandEm Selective LAbeling (TESLA) strategy. TESLA includes unsupervised diversity selection and supervised selection using prompt-based uncertainty. In addition, we propose a diversified visual prompt tuning strategy to provide multi-prompt-based discrepant predictions for TESLA. We evaluate our method on liver tumor segmentation and achieve state-of-the-art performance, outperforming traditional fine-tuning with only 6% of tunable parameters, also achieving 94% of full-data performance by labeling only 5% of the data.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"42 1","pages":"14-24"},"PeriodicalIF":0.0,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76380612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improved Multi-Shot Diffusion-Weighted MRI with Zero-Shot Self-Supervised Learning Reconstruction
Jaejin Cho, Yohan Jun, Xiaoqing Wang, Caique Kobayashi, B. Bilgiç
Diffusion MRI is commonly performed using echo-planar imaging (EPI) due to its rapid acquisition time. However, the resolution of diffusion-weighted images is often limited by magnetic field inhomogeneity-related artifacts and blurring induced by T2- and T2*-relaxation effects. To address these limitations, multi-shot EPI (msEPI) combined with parallel imaging techniques is frequently employed. Nevertheless, reconstructing msEPI can be challenging due to phase variation between multiple shots. In this study, we introduce a novel msEPI reconstruction approach called zero-MIRID (zero-shot self-supervised learning of Multi-shot Image Reconstruction for Improved Diffusion MRI). This method jointly reconstructs msEPI data by incorporating deep learning-based image regularization techniques. The network incorporates CNN denoisers in both k- and image-spaces, while leveraging virtual coils to enhance image reconstruction conditioning. By employing a self-supervised learning technique and dividing sampled data into three groups, the proposed approach achieves superior results compared to the state-of-the-art parallel imaging method, as demonstrated in an in-vivo experiment.
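The zero-shot self-supervised split mentioned above, which divides the sampled data of a single scan into three groups, could look roughly like the sketch below; the function name, split ratios, and mask layout are our assumptions rather than the paper's implementation:

```python
import numpy as np

def split_kspace_mask(sampling_mask, rho_loss=0.4, rho_val=0.2, seed=0):
    """Partition the sampled k-space locations of one scan into three
    disjoint groups: network input, training-loss mask, validation mask."""
    mask = np.asarray(sampling_mask).astype(np.int64)  # binary 0/1 sampling mask
    rng = np.random.default_rng(seed)
    idx = np.flatnonzero(mask)
    rng.shuffle(idx)
    n_loss = int(rho_loss * idx.size)
    n_val = int(rho_val * idx.size)
    loss_mask = np.zeros_like(mask)
    loss_mask.flat[idx[:n_loss]] = 1               # supervises the network output
    val_mask = np.zeros_like(mask)
    val_mask.flat[idx[n_loss:n_loss + n_val]] = 1  # used for early stopping
    input_mask = mask - loss_mask - val_mask       # what the network actually sees
    return input_mask, loss_mask, val_mask
```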
{"title":"Improved Multi-Shot Diffusion-Weighted MRI with Zero-Shot Self-Supervised Learning Reconstruction","authors":"Jaejin Cho, Yohan Jun, Xiaoqing Wang, Caique Kobayashi, B. Bilgiç","doi":"10.48550/arXiv.2308.05103","DOIUrl":"https://doi.org/10.48550/arXiv.2308.05103","url":null,"abstract":"Diffusion MRI is commonly performed using echo-planar imaging (EPI) due to its rapid acquisition time. However, the resolution of diffusion-weighted images is often limited by magnetic field inhomogeneity-related artifacts and blurring induced by T2- and T2*-relaxation effects. To address these limitations, multi-shot EPI (msEPI) combined with parallel imaging techniques is frequently employed. Nevertheless, reconstructing msEPI can be challenging due to phase variation between multiple shots. In this study, we introduce a novel msEPI reconstruction approach called zero-MIRID (zero-shot self-supervised learning of Multi-shot Image Reconstruction for Improved Diffusion MRI). This method jointly reconstructs msEPI data by incorporating deep learning-based image regularization techniques. The network incorporates CNN denoisers in both k- and image-spaces, while leveraging virtual coils to enhance image reconstruction conditioning. By employing a self-supervised learning technique and dividing sampled data into three groups, the proposed approach achieves superior results compared to the state-of-the-art parallel imaging method, as demonstrated in an in-vivo experiment.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"12 1","pages":"457-466"},"PeriodicalIF":0.0,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88631117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Synthetic Augmentation with Large-scale Unconditional Pre-training
Jiarong Ye, Haomiao Ni, Peng Jin, Sharon X. Huang, Yuan Xue
Deep learning based medical image recognition systems often require a substantial amount of training data with expert annotations, which can be expensive and time-consuming to obtain. Recently, synthetic augmentation techniques have been proposed to mitigate the issue by generating realistic images conditioned on class labels. However, the effectiveness of these methods heavily depends on the representation capability of the trained generative model, which cannot be guaranteed without sufficient labeled training data. To further reduce the dependency on annotated data, we propose a synthetic augmentation method called HistoDiffusion, which can be pre-trained on large-scale unlabeled datasets and later applied to a small-scale labeled dataset for augmented training. In particular, we train a latent diffusion model (LDM) on diverse unlabeled datasets to learn common features and generate realistic images without conditional inputs. Then, we fine-tune the model with classifier guidance in latent space on an unseen labeled dataset so that the model can synthesize images of specific categories. Additionally, we adopt a selective mechanism to only add synthetic samples with high confidence of matching to target labels. We evaluate our proposed method by pre-training on three histopathology datasets and testing on a histopathology dataset of colorectal cancer (CRC) excluded from the pre-training datasets. With HistoDiffusion augmentation, the classification accuracy of a backbone classifier is remarkably improved by 6.4% using a small set of the original labels. Our code is available at https://github.com/karenyyy/HistoDiffAug.
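The selective mechanism for admitting synthetic samples can be illustrated with a simple confidence filter; the helper below and its threshold are hypothetical stand-ins, not the published code:

```python
import torch

@torch.no_grad()
def select_confident_synthetics(images, target_labels, classifier, tau=0.9):
    """Keep only synthetic images that the classifier confidently
    assigns to their intended target class."""
    probs = torch.softmax(classifier(images), dim=1)
    conf = probs.gather(1, target_labels.unsqueeze(1)).squeeze(1)
    keep = conf >= tau  # high confidence of matching the target label
    return images[keep], target_labels[keep]
```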
{"title":"Synthetic Augmentation with Large-scale Unconditional Pre-training","authors":"Jiarong Ye, Haomiao Ni, Peng Jin, Sharon X. Huang, Yuan Xue","doi":"10.48550/arXiv.2308.04020","DOIUrl":"https://doi.org/10.48550/arXiv.2308.04020","url":null,"abstract":"Deep learning based medical image recognition systems often require a substantial amount of training data with expert annotations, which can be expensive and time-consuming to obtain. Recently, synthetic augmentation techniques have been proposed to mitigate the issue by generating realistic images conditioned on class labels. However, the effectiveness of these methods heavily depends on the representation capability of the trained generative model, which cannot be guaranteed without sufficient labeled training data. To further reduce the dependency on annotated data, we propose a synthetic augmentation method called HistoDiffusion, which can be pre-trained on large-scale unlabeled datasets and later applied to a small-scale labeled dataset for augmented training. In particular, we train a latent diffusion model (LDM) on diverse unlabeled datasets to learn common features and generate realistic images without conditional inputs. Then, we fine-tune the model with classifier guidance in latent space on an unseen labeled dataset so that the model can synthesize images of specific categories. Additionally, we adopt a selective mechanism to only add synthetic samples with high confidence of matching to target labels. We evaluate our proposed method by pre-training on three histopathology datasets and testing on a histopathology dataset of colorectal cancer (CRC) excluded from the pre-training datasets. With HistoDiffusion augmentation, the classification accuracy of a backbone classifier is remarkably improved by 6.4% using a small set of the original labels. Our code is available at https://github.com/karenyyy/HistoDiffAug.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"82 1","pages":"754-764"},"PeriodicalIF":0.0,"publicationDate":"2023-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73314670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network
Bryar Shareef, Min Xian, Aleksandar Vakanski, Haotian Wang
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification. Although convolutional neural networks (CNNs) have demonstrated reliable performance in tumor classification, they have inherent limitations for modeling global and long-range dependencies due to the localized nature of convolution operations. Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations. In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation using a hybrid architecture composed of CNNs and Swin Transformer components. The proposed approach was compared to nine BUS classification methods and evaluated using seven quantitative metrics on a dataset of 3,320 BUS images. The results indicate that Hybrid-MT-ESTAN achieved the highest accuracy, sensitivity, and F1 score of 82.7%, 86.4%, and 86.0%, respectively.
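The multitask arrangement of a shared encoder feeding both a segmentation decoder and a classification head can be sketched schematically; the module below is our own simplification (assuming 4D CNN-style feature maps), not the published Hybrid-MT-ESTAN architecture:

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared encoder feeding a segmentation decoder and a classification head."""
    def __init__(self, encoder: nn.Module, decoder: nn.Module,
                 feat_dim: int, num_classes: int = 2):
        super().__init__()
        self.encoder, self.decoder = encoder, decoder
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_dim, num_classes))

    def forward(self, x):
        feats = self.encoder(x)  # shared features, assumed shape (B, C, H, W)
        return self.decoder(feats), self.cls_head(feats)

# joint objective (lam balances the two tasks):
# loss = seg_loss(mask_pred, mask) + lam * F.cross_entropy(cls_pred, label)
```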
{"title":"Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network","authors":"Bryar Shareef, Min Xian, Aleksandar Vakanski, Haotian Wang","doi":"10.48550/arXiv.2308.02101","DOIUrl":"https://doi.org/10.48550/arXiv.2308.02101","url":null,"abstract":"Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification. Although convolutional neural networks (CNNs) have demonstrated reliable performance in tumor classification, they have inherent limitations for modeling global and long-range dependencies due to the localized nature of convolution operations. Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations. In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation using a hybrid architecture composed of CNNs and Swin Transformer components. The proposed approach was compared to nine BUS classification methods and evaluated using seven quantitative metrics on a dataset of 3,320 BUS images. The results indicate that Hybrid-MT-ESTAN achieved the highest accuracy, sensitivity, and F1 score of 82.7%, 86.4%, and 86.0%, respectively.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"19 1","pages":"344-353"},"PeriodicalIF":0.0,"publicationDate":"2023-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83680948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Synthesising Rare Cataract Surgery Samples with Guided Diffusion Models
Yannik Frisch, Moritz Fuchs, Antoine Pierre Sanner, F. A. Ucar, Marius Frenzel, Joana Wasielica-Poslednik, A. Gericke, F. Wagner, Thomas Dratsch, A. Mukhopadhyay
Cataract surgery is a frequently performed procedure that demands automation and advanced assistance systems. However, gathering and annotating data for training such systems is resource intensive. The publicly available data also comprises severe imbalances inherent to the surgical process. Motivated by this, we analyse cataract surgery video data for the worst-performing phases of a pre-trained downstream tool classifier. The analysis demonstrates that imbalances degrade the classifier's performance on underrepresented cases. To address this challenge, we utilise a conditional generative model based on Denoising Diffusion Implicit Models (DDIM) and Classifier-Free Guidance (CFG). Our model can synthesise diverse, high-quality examples based on complex multi-class multi-label conditions, such as surgical phases and combinations of surgical tools. We affirm that the synthesised samples display tools that the classifier recognises. These samples are hard to differentiate from real images, even for clinical experts with more than five years of experience. Further, our synthetically extended data can alleviate the data sparsity problem for the downstream task of tool classification. The evaluations demonstrate that the model can generate valuable unseen examples, allowing the tool classifier to improve by up to 10% for rare cases. Overall, our approach can facilitate the development of automated assistance systems for cataract surgery by providing a reliable source of realistic synthetic data, which we make available to everyone.
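Classifier-free guidance itself is a standard mechanism: the denoiser is evaluated with and without the condition and the two noise predictions are blended. The sketch below shows that step only; the function signature is assumed, and the authors' DDIM sampler is not reproduced:

```python
def cfg_noise_pred(eps_model, x_t, t, cond, null_cond, guidance_scale=3.0):
    """Blend conditional and unconditional noise predictions (standard CFG)."""
    eps_cond = eps_model(x_t, t, cond)         # condition: e.g. phase + tool labels
    eps_uncond = eps_model(x_t, t, null_cond)  # unconditional (null) branch
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```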
{"title":"Synthesising Rare Cataract Surgery Samples with Guided Diffusion Models","authors":"Yannik Frisch, Moritz Fuchs, Antoine Pierre Sanner, F. A. Ucar, Marius Frenzel, Joana Wasielica-Poslednik, A. Gericke, F. Wagner, Thomas Dratsch, A. Mukhopadhyay","doi":"10.48550/arXiv.2308.02587","DOIUrl":"https://doi.org/10.48550/arXiv.2308.02587","url":null,"abstract":"Cataract surgery is a frequently performed procedure that demands automation and advanced assistance systems. However, gathering and annotating data for training such systems is resource intensive. The publicly available data also comprises severe imbalances inherent to the surgical process. Motivated by this, we analyse cataract surgery video data for the worst-performing phases of a pre-trained downstream tool classifier. The analysis demonstrates that imbalances deteriorate the classifier's performance on underrepresented cases. To address this challenge, we utilise a conditional generative model based on Denoising Diffusion Implicit Models (DDIM) and Classifier-Free Guidance (CFG). Our model can synthesise diverse, high-quality examples based on complex multi-class multi-label conditions, such as surgical phases and combinations of surgical tools. We affirm that the synthesised samples display tools that the classifier recognises. These samples are hard to differentiate from real images, even for clinical experts with more than five years of experience. Further, our synthetically extended data can improve the data sparsity problem for the downstream task of tool classification. The evaluations demonstrate that the model can generate valuable unseen examples, allowing the tool classifier to improve by up to 10% for rare cases. Overall, our approach can facilitate the development of automated assistance systems for cataract surgery by providing a reliable source of realistic synthetic data, which we make available for everyone.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"305 1","pages":"354-364"},"PeriodicalIF":0.0,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77114217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
L3DMC: Lifelong Learning using Distillation via Mixed-Curvature Space
Kaushik Roy, Peyman Moghadam, Mehrtash Harandi
The performance of a lifelong learning (L3) model degrades when it is trained on a series of tasks, as the geometrical formation of the embedding space changes while learning novel concepts sequentially. The majority of existing L3 approaches operate on a fixed-curvature (e.g., zero-curvature Euclidean) space that is not necessarily suitable for modeling the complex geometric structure of data. Furthermore, the distillation strategies apply constraints directly on low-dimensional embeddings, discouraging the L3 model from learning new concepts by making the model highly stable. To address the problem, we propose a distillation strategy named L3DMC that operates on mixed-curvature spaces to preserve the already-learned knowledge by modeling and maintaining complex geometrical structures. We propose to embed the projected low dimensional embedding of fixed-curvature spaces (Euclidean and hyperbolic) to higher-dimensional Reproducing Kernel Hilbert Space (RKHS) using a positive-definite kernel function to attain rich representation. Afterward, we optimize the L3 model by minimizing the discrepancies between the new sample representation and the subspace constructed using the old representation in RKHS. L3DMC is capable of adapting new knowledge better without forgetting old knowledge as it combines the representation power of multiple fixed-curvature spaces and is performed on higher-dimensional RKHS. Thorough experiments on three benchmarks demonstrate the effectiveness of our proposed distillation strategy for medical image classification in L3 settings. Our code implementation is publicly available at https://github.com/csiro-robotics/L3DMC.
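One way to read the RKHS distillation idea is as a kernel-space discrepancy between old and new representations. The MMD-style loss below, built on an RBF kernel, is a loose sketch under that reading, not the paper's exact mixed-curvature formulation:

```python
import torch

def rbf_kernel(a, b, gamma=1.0):
    # positive-definite RBF kernel between two batches of feature vectors
    return torch.exp(-gamma * torch.cdist(a, b).pow(2))

def rkhs_distillation_loss(feat_new, feat_old, gamma=1.0):
    """MMD-style discrepancy between new- and old-model features in an RKHS."""
    k_nn = rbf_kernel(feat_new, feat_new, gamma).mean()
    k_oo = rbf_kernel(feat_old, feat_old, gamma).mean()
    k_no = rbf_kernel(feat_new, feat_old, gamma).mean()
    return k_nn + k_oo - 2.0 * k_no  # zero when the two representations agree
```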
{"title":"L3DMC: Lifelong Learning using Distillation via Mixed-Curvature Space","authors":"Kaushik Roy, Peyman Moghadam, Mehrtash Harandi","doi":"10.48550/arXiv.2307.16459","DOIUrl":"https://doi.org/10.48550/arXiv.2307.16459","url":null,"abstract":"The performance of a lifelong learning (L3) model degrades when it is trained on a series of tasks, as the geometrical formation of the embedding space changes while learning novel concepts sequentially. The majority of existing L3 approaches operate on a fixed-curvature (e.g., zero-curvature Euclidean) space that is not necessarily suitable for modeling the complex geometric structure of data. Furthermore, the distillation strategies apply constraints directly on low-dimensional embeddings, discouraging the L3 model from learning new concepts by making the model highly stable. To address the problem, we propose a distillation strategy named L3DMC that operates on mixed-curvature spaces to preserve the already-learned knowledge by modeling and maintaining complex geometrical structures. We propose to embed the projected low dimensional embedding of fixed-curvature spaces (Euclidean and hyperbolic) to higher-dimensional Reproducing Kernel Hilbert Space (RKHS) using a positive-definite kernel function to attain rich representation. Afterward, we optimize the L3 model by minimizing the discrepancies between the new sample representation and the subspace constructed using the old representation in RKHS. L3DMC is capable of adapting new knowledge better without forgetting old knowledge as it combines the representation power of multiple fixed-curvature spaces and is performed on higher-dimensional RKHS. Thorough experiments on three benchmarks demonstrate the effectiveness of our proposed distillation strategy for medical image classification in L3 settings. Our code implementation is publicly available at https://github.com/csiro-robotics/L3DMC.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"12 1","pages":"123-133"},"PeriodicalIF":0.0,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88870206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Domain Adaptation for Medical Image Segmentation using Transformation-Invariant Self-Training
Negin Ghamsarian, Javier Gamazo Tejero, Pablo Márquez-Neila, S. Wolf, M. Zinkernagel, K. Schoeffmann, R. Sznitman
Models capable of leveraging unlabelled data are crucial in overcoming large distribution gaps between the acquired datasets across different imaging devices and configurations. In this regard, self-training techniques based on pseudo-labeling have been shown to be highly effective for semi-supervised domain adaptation. However, the unreliability of pseudo labels can hinder the capability of self-training techniques to induce abstract representation from the unlabeled target dataset, especially in the case of large distribution gaps. Since the neural network performance should be invariant to image transformations, we look to this fact to identify uncertain pseudo labels. Indeed, we argue that transformation invariant detections can provide more reasonable approximations of ground truth. Accordingly, we propose a semi-supervised learning strategy for domain adaptation termed transformation-invariant self-training (TI-ST). The proposed method assesses pixel-wise pseudo-labels' reliability and filters out unreliable detections during self-training. We perform comprehensive evaluations for domain adaptation using three different modalities of medical images, two different network architectures, and several alternative state-of-the-art domain adaptation methods. Experimental results confirm the superiority of our proposed method in mitigating the lack of target domain annotation and boosting segmentation performance in the target domain.
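The transformation-invariance test for pseudo-label reliability can be sketched with a horizontal flip as the transformation; the helper below, including its single choice of transformation, is our assumption:

```python
import torch

@torch.no_grad()
def invariant_pseudo_labels(model, x):
    """Keep pseudo-labels only where the prediction survives a flip."""
    p1 = model(x).softmax(dim=1)
    p2 = torch.flip(model(torch.flip(x, dims=[-1])), dims=[-1]).softmax(dim=1)
    y1, y2 = p1.argmax(1), p2.argmax(1)
    reliable = y1 == y2  # transformation-invariant pixels
    return y1, reliable  # supervise self-training only on `reliable` positions
```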
{"title":"Domain Adaptation for Medical Image Segmentation using Transformation-Invariant Self-Training","authors":"Negin Ghamsarian, Javier Gamazo Tejero, Pablo Márquez-Neila, S. Wolf, M. Zinkernagel, K. Schoeffmann, R. Sznitman","doi":"10.48550/arXiv.2307.16660","DOIUrl":"https://doi.org/10.48550/arXiv.2307.16660","url":null,"abstract":"Models capable of leveraging unlabelled data are crucial in overcoming large distribution gaps between the acquired datasets across different imaging devices and configurations. In this regard, self-training techniques based on pseudo-labeling have been shown to be highly effective for semi-supervised domain adaptation. However, the unreliability of pseudo labels can hinder the capability of self-training techniques to induce abstract representation from the unlabeled target dataset, especially in the case of large distribution gaps. Since the neural network performance should be invariant to image transformations, we look to this fact to identify uncertain pseudo labels. Indeed, we argue that transformation invariant detections can provide more reasonable approximations of ground truth. Accordingly, we propose a semi-supervised learning strategy for domain adaptation termed transformation-invariant self-training (TI-ST). The proposed method assesses pixel-wise pseudo-labels' reliability and filters out unreliable detections during self-training. We perform comprehensive evaluations for domain adaptation using three different modalities of medical images, two different network architectures, and several alternative state-of-the-art domain adaptation methods. Experimental results confirm the superiority of our proposed method in mitigating the lack of target domain annotation and boosting segmentation performance in the target domain.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"76 1","pages":"331-341"},"PeriodicalIF":0.0,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86311229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cross-Dataset Adaptation for Instrument Classification in Cataract Surgery Videos
Jay N. Paranjape, S. Sikder, Vishal M. Patel, S. Vedula
Surgical tool presence detection is an important part of the intra-operative and post-operative analysis of a surgery. However, state-of-the-art models that perform this task well on a particular dataset often perform poorly when tested on another dataset. This occurs due to a significant domain shift between the datasets resulting from the use of different tools, sensors, data resolutions, etc. In this paper, we highlight this domain shift in the commonly performed cataract surgery and propose a novel end-to-end Unsupervised Domain Adaptation (UDA) method called the Barlow Adaptor that addresses the problem of distribution shift without requiring any labels from another domain. In addition, we introduce a novel loss called the Barlow Feature Alignment Loss (BFAL), which aligns features across different domains while reducing redundancy and the need for larger batch sizes, thus improving cross-dataset performance. The use of BFAL is a novel approach to addressing the challenge of domain shift in cataract surgery data. Extensive experiments are conducted on two cataract surgery datasets, showing that the proposed method outperforms state-of-the-art UDA methods by 6%. The code can be found at https://github.com/JayParanjape/Barlow-Adaptor
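A Barlow-Twins-style alignment loss gives a feel for how cross-domain feature alignment with redundancy reduction can work; the formulation below is our reading of the general idea, not the published BFAL:

```python
import torch

def barlow_alignment_loss(f_src, f_tgt, lam=5e-3):
    """Drive the source/target feature cross-correlation toward the identity."""
    f_src = (f_src - f_src.mean(0)) / (f_src.std(0) + 1e-6)  # standardize per dim
    f_tgt = (f_tgt - f_tgt.mean(0)) / (f_tgt.std(0) + 1e-6)
    c = f_src.T @ f_tgt / f_src.size(0)  # (D, D) cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()            # align matching dims
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # decorrelate the rest
    return on_diag + lam * off_diag  # lam trades alignment vs. redundancy reduction
```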
{"title":"Cross-Dataset Adaptation for Instrument Classification in Cataract Surgery Videos","authors":"Jay N. Paranjape, S. Sikder, Vishal M. Patel, S. Vedula","doi":"10.48550/arXiv.2308.04035","DOIUrl":"https://doi.org/10.48550/arXiv.2308.04035","url":null,"abstract":"Surgical tool presence detection is an important part of the intra-operative and post-operative analysis of a surgery. State-of-the-art models, which perform this task well on a particular dataset, however, perform poorly when tested on another dataset. This occurs due to a significant domain shift between the datasets resulting from the use of different tools, sensors, data resolution etc. In this paper, we highlight this domain shift in the commonly performed cataract surgery and propose a novel end-to-end Unsupervised Domain Adaptation (UDA) method called the Barlow Adaptor that addresses the problem of distribution shift without requiring any labels from another domain. In addition, we introduce a novel loss called the Barlow Feature Alignment Loss (BFAL) which aligns features across different domains while reducing redundancy and the need for higher batch sizes, thus improving cross-dataset performance. The use of BFAL is a novel approach to address the challenge of domain shift in cataract surgery data. Extensive experiments are conducted on two cataract surgery datasets and it is shown that the proposed method outperforms the state-of-the-art UDA methods by 6%. The code can be found at https://github.com/JayParanjape/Barlow-Adaptor","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"722 1","pages":"739-748"},"PeriodicalIF":0.0,"publicationDate":"2023-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78745176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Structure-Preserving Synthesis: MaskGAN for Unpaired MR-CT Translation
Minh Phan, Zhibin Liao, J. Verjans, Minh-Son To
Medical image synthesis is a challenging task due to the scarcity of paired data. Several methods have applied CycleGAN to leverage unpaired data, but they often generate inaccurate mappings that shift the anatomy. This problem is further exacerbated when the images from the source and target modalities are heavily misaligned. Recent methods have aimed to address this issue by incorporating a supplementary segmentation network. Unfortunately, this strategy requires costly and time-consuming pixel-level annotations. To overcome this problem, this paper proposes MaskGAN, a novel and cost-effective framework that enforces structural consistency by utilizing automatically extracted coarse masks. Our approach employs a mask generator to outline anatomical structures and a content generator to synthesize CT contents that align with these structures. Extensive experiments demonstrate that MaskGAN outperforms state-of-the-art synthesis methods on a challenging pediatric dataset, where MR and CT scans are heavily misaligned due to rapid growth in children. Specifically, MaskGAN excels in preserving anatomical structures without the need for expert annotations. The code for this paper can be found at https://github.com/HieuPhan33/MaskGAN.
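The structural-consistency idea, comparing automatically extracted coarse masks of the input MR and the synthesized CT, can be sketched as follows; the soft thresholding mask extractor is a hypothetical stand-in for the paper's mask generator:

```python
import torch
import torch.nn.functional as F

def coarse_mask(img, thresh=0.1, sharpness=50.0):
    # soft foreground mask: a differentiable surrogate for hard thresholding,
    # extracted automatically so no pixel-level annotation is required
    return torch.sigmoid(sharpness * (img.mean(dim=1, keepdim=True) - thresh))

def structure_loss(mr, fake_ct):
    """Penalize the generator when the coarse mask of the synthesized CT
    drifts from the coarse mask of the input MR."""
    return F.l1_loss(coarse_mask(fake_ct), coarse_mask(mr))
```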
{"title":"Structure-Preserving Synthesis: MaskGAN for Unpaired MR-CT Translation","authors":"Minh Phan, Zhibin Liao, J. Verjans, Minh-Son To","doi":"10.48550/arXiv.2307.16143","DOIUrl":"https://doi.org/10.48550/arXiv.2307.16143","url":null,"abstract":"Medical image synthesis is a challenging task due to the scarcity of paired data. Several methods have applied CycleGAN to leverage unpaired data, but they often generate inaccurate mappings that shift the anatomy. This problem is further exacerbated when the images from the source and target modalities are heavily misaligned. Recently, current methods have aimed to address this issue by incorporating a supplementary segmentation network. Unfortunately, this strategy requires costly and time-consuming pixel-level annotations. To overcome this problem, this paper proposes MaskGAN, a novel and cost-effective framework that enforces structural consistency by utilizing automatically extracted coarse masks. Our approach employs a mask generator to outline anatomical structures and a content generator to synthesize CT contents that align with these structures. Extensive experiments demonstrate that MaskGAN outperforms state-of-the-art synthesis methods on a challenging pediatric dataset, where MR and CT scans are heavily misaligned due to rapid growth in children. Specifically, MaskGAN excels in preserving anatomical structures without the need for expert annotations. The code for this paper can be found at https://github.com/HieuPhan33/MaskGAN.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"432 1","pages":"56-65"},"PeriodicalIF":0.0,"publicationDate":"2023-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77390693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
3D Medical Image Segmentation with Sparse Annotation via Cross-Teaching between 3D and 2D Networks
Heng Cai, Lei Qi, Qian Yu, Yinghuan Shi, Yang Gao
Medical image segmentation typically necessitates a large and precisely annotated dataset. However, obtaining pixel-wise annotation is a labor-intensive task that requires significant effort from domain experts, making it challenging to obtain in practical clinical scenarios. In such situations, reducing the amount of annotation required is a more practical approach. One feasible direction is sparse annotation, which involves annotating only a few slices and, unlike traditional weak annotation methods such as bounding boxes and scribbles, preserves exact boundaries. However, learning from sparse annotation is challenging due to the scarcity of supervision signals. To address this issue, we propose a framework that can robustly learn from sparse annotation using the cross-teaching of both 3D and 2D networks. Considering the characteristics of these networks, we develop two pseudo-label selection strategies: hard-soft confidence thresholding and consistent label fusion. Our experimental results on the MMWHS dataset demonstrate that our method outperforms the state-of-the-art (SOTA) semi-supervised segmentation methods. Moreover, our approach achieves results that are comparable to the fully-supervised upper bound.
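Cross-teaching with a confidence threshold can be sketched as two networks exchanging confident pseudo-labels; the loss below, including its threshold and masking scheme, is our assumption of the general mechanism rather than the paper's exact strategies:

```python
import torch
import torch.nn.functional as F

def cross_teach_loss(logits_3d, logits_2d, tau=0.8):
    """Each network is supervised on unlabeled voxels by the other
    network's confident predictions (hard confidence threshold tau)."""
    with torch.no_grad():
        conf3, y3 = logits_3d.softmax(1).max(1)  # 3D teacher for the 2D net
        conf2, y2 = logits_2d.softmax(1).max(1)  # 2D teacher for the 3D net
        m3 = (conf3 > tau).float()
        m2 = (conf2 > tau).float()
    # average only over confident voxels; clamp avoids division by zero
    l_2d = (F.cross_entropy(logits_2d, y3, reduction="none") * m3).sum() / m3.sum().clamp(min=1)
    l_3d = (F.cross_entropy(logits_3d, y2, reduction="none") * m2).sum() / m2.sum().clamp(min=1)
    return l_2d + l_3d
```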
{"title":"3D Medical Image Segmentation with Sparse Annotation via Cross-Teaching between 3D and 2D Networks","authors":"Heng Cai, Lei Qi, Qian Yu, Yinghuan Shi, Yang Gao","doi":"10.48550/arXiv.2307.16256","DOIUrl":"https://doi.org/10.48550/arXiv.2307.16256","url":null,"abstract":"Medical image segmentation typically necessitates a large and precisely annotated dataset. However, obtaining pixel-wise annotation is a labor-intensive task that requires significant effort from domain experts, making it challenging to obtain in practical clinical scenarios. In such situations, reducing the amount of annotation required is a more practical approach. One feasible direction is sparse annotation, which involves annotating only a few slices, and has several advantages over traditional weak annotation methods such as bounding boxes and scribbles, as it preserves exact boundaries. However, learning from sparse annotation is challenging due to the scarcity of supervision signals. To address this issue, we propose a framework that can robustly learn from sparse annotation using the cross-teaching of both 3D and 2D networks. Considering the characteristic of these networks, we develop two pseudo label selection strategies, which are hard-soft confidence threshold and consistent label fusion. Our experimental results on the MMWHS dataset demonstrate that our method outperforms the state-of-the-art (SOTA) semi-supervised segmentation methods. Moreover, our approach achieves results that are comparable to the fully-supervised upper bound result.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"18 1","pages":"614-624"},"PeriodicalIF":0.0,"publicationDate":"2023-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76751790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0