
Latest publications in Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention

Edge-aware Multi-task Network for Integrating Quantification Segmentation and Uncertainty Prediction of Liver Tumor on Multi-modality Non-contrast MRI
Xiaojiao Xiao, Qinmin Hu, Guanghui Wang
Simultaneous multi-index quantification, segmentation, and uncertainty estimation of liver tumors on multi-modality non-contrast magnetic resonance imaging (NCMRI) are crucial for accurate diagnosis. However, existing methods lack an effective mechanism for multi-modality NCMRI fusion and for capturing accurate boundary information, making these tasks challenging. To address these issues, this paper proposes a unified framework, namely the edge-aware multi-task network (EaMtNet), to associate multi-index quantification, segmentation, and uncertainty of liver tumors on multi-modality NCMRI. The EaMtNet employs two parallel CNN encoders and Sobel filters to extract local features and edge maps, respectively. The newly designed edge-aware feature aggregation (EaFA) module is used for feature fusion and selection, making the network edge-aware by capturing long-range dependencies between feature and edge maps. Multi-tasking leverages prediction discrepancy to estimate uncertainty and improve segmentation and quantification performance. Extensive experiments are performed on multi-modality NCMRI from 250 clinical subjects. The proposed model outperforms the state-of-the-art by a large margin, achieving a Dice similarity coefficient of 90.01±1.23 and a mean absolute error of 2.72±0.58 mm for MD. The results demonstrate the potential of EaMtNet as a reliable clinical aid for medical image analysis.
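The abstract's edge-extraction step applies Sobel filters to obtain edge maps alongside the CNN features. A minimal NumPy sketch of Sobel gradient-magnitude extraction (function name and toy image are illustrative, not from the paper):

```python
import numpy as np

def sobel_edge_map(img):
    """Gradient-magnitude edge map of a 2-D image via 3x3 Sobel kernels."""
    # Horizontal- and vertical-gradient Sobel kernels
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w))
    pad = np.pad(img, 1, mode="edge")  # replicate borders to keep the shape
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx = (patch * kx).sum()
            gy = (patch * ky).sum()
            out[i, j] = np.hypot(gx, gy)  # gradient magnitude
    return out
```

In practice such filtering would be a fixed-weight convolution inside the network; the explicit loop here just keeps the arithmetic visible.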
DOI: 10.48550/arXiv.2307.01798 | Pages: 652-661 | Published: 2023-07-04
Citations: 0
Mitigating Calibration Bias Without Fixed Attribute Grouping for Improved Fairness in Medical Imaging Analysis
Changjian Shui, Justin Szeto, Raghav Mehta, Douglas Arnold, T. Arbel
Trustworthy deployment of deep learning medical imaging models into real-world clinical practice requires that they be calibrated. However, models that are well calibrated overall can still be poorly calibrated for a sub-population, potentially resulting in a clinician unwittingly making poor decisions for this group based on the recommendations of the model. Although methods have been shown to successfully mitigate biases across subgroups in terms of model accuracy, this work focuses on the open problem of mitigating calibration biases in the context of medical image analysis. Our method does not require subgroup attributes during training, permitting the flexibility to mitigate biases for different choices of sensitive attributes without re-training. To this end, we propose a novel two-stage method, Cluster-Focal, which first identifies poorly calibrated samples, clusters them into groups, and then introduces a group-wise focal loss to reduce calibration bias. We evaluate our method on skin lesion classification with the public HAM10000 dataset, and on predicting future lesional activity for multiple sclerosis (MS) patients. In addition to considering traditional sensitive attributes (e.g., age, sex) that define demographic subgroups, we also consider biases among groups with different image-derived attributes, such as lesion load, which are relevant in medical image analysis. Our results demonstrate that our method effectively controls calibration error in the worst-performing subgroups while preserving prediction performance and outperforming recent baselines.
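The group-wise focal loss mentioned in the abstract builds on the standard focal loss, which down-weights well-classified samples via a focusing parameter gamma. A minimal sketch with per-group gammas (the function names and the idea of assigning a larger gamma to a poorly calibrated group are illustrative assumptions, not the paper's exact formulation):

```python
import math

def focal_loss(p_true, gamma):
    """Focal loss for the probability assigned to the true class.
    With gamma = 0 this reduces to ordinary cross-entropy."""
    return -((1.0 - p_true) ** gamma) * math.log(p_true)

def groupwise_focal(probs, groups, gammas):
    """Average focal loss where each sample's gamma depends on its group.

    probs  : true-class probabilities per sample
    groups : group id per sample
    gammas : mapping group id -> focusing parameter
    """
    return sum(focal_loss(p, gammas[g]) for p, g in zip(probs, groups)) / len(probs)
```

The design intuition: a larger gamma for a poorly calibrated group shrinks the loss on already-confident samples, concentrating gradient signal on the hard, miscalibrated ones.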
DOI: 10.48550/arXiv.2307.01738 | Pages: 189-198 | Published: 2023-07-04
Citations: 1
H-DenseFormer: An Efficient Hybrid Densely Connected Transformer for Multimodal Tumor Segmentation
Jun Shi, Hongyu Kan, Shulan Ruan, Ziqi Zhu, Minfan Zhao, Liang Qiao, Zhaohui Wang, Hong An, Xudong Xue
Recently, deep learning methods have been widely used for tumor segmentation of multimodal medical images with promising results. However, most existing methods are limited by insufficient representational ability, a fixed number of input modalities, and high computational complexity. In this paper, we propose a hybrid densely connected network for tumor segmentation, named H-DenseFormer, which combines the representational power of the Convolutional Neural Network (CNN) and the Transformer structures. Specifically, H-DenseFormer integrates a Transformer-based Multi-path Parallel Embedding (MPE) module that can take an arbitrary number of modalities as input to extract the fusion features from different modalities. Then, the multimodal fusion features are delivered to different levels of the encoder to enhance multimodal learning representation. Besides, we design a lightweight Densely Connected Transformer (DCT) block to replace the standard Transformer block, thus significantly reducing computational complexity. We conduct extensive experiments on two public multimodal datasets, HECKTOR21 and PI-CAI22. The experimental results show that our proposed method outperforms the existing state-of-the-art methods while having lower computational complexity. The source code is available at https://github.com/shijun18/H-DenseFormer.
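The key property of the MPE module described above is accepting an arbitrary number of modalities: one embedding path per modality, fused by an order- and count-agnostic reduction. A minimal NumPy sketch of that idea (per-path linear projections fused by summation; names and shapes are illustrative, not the paper's architecture):

```python
import numpy as np

def multi_path_embed(modalities, weights):
    """Embed each modality with its own projection, then fuse by summation.

    modalities : list of (tokens, in_dim) arrays, one per input modality
    weights    : list of (in_dim, embed_dim) projection matrices
    Summation makes the fused output shape independent of how many
    modalities are present.
    """
    embedded = [m @ w for m, w in zip(modalities, weights)]
    return np.sum(embedded, axis=0)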
DOI: 10.48550/arXiv.2307.01486 | Pages: 692-702 | Published: 2023-07-04
Citations: 1
An Explainable Deep Framework: Towards Task-Specific Fusion for Multi-to-One MRI Synthesis
Luyi Han, Tianyu Zhang, Yunzhi Huang, Haoran Dou, Xin Wang, Yuan Gao, Chun-Ta Lu, Tan Tao, R. Mann
Multi-sequence MRI is valuable in clinical settings for reliable diagnosis and treatment prognosis, but some sequences may be unusable or missing for various reasons. To address this issue, MRI synthesis is a potential solution. Recent deep learning-based methods have achieved good performance in combining multiple available sequences for missing sequence synthesis. Despite their success, these methods lack the ability to quantify the contributions of different input sequences and estimate the quality of generated images, making it hard to be practical. Hence, we propose an explainable task-specific synthesis network, which adapts weights automatically for specific sequence generation tasks and provides interpretability and reliability in two ways: (1) it visualizes the contribution of each input sequence in the fusion stage by a trainable task-specific weighted average module; (2) it highlights the area the network tried to refine during synthesis by a task-specific attention module. We conduct experiments on the BraTS2021 dataset of 1251 subjects, and results on arbitrary sequence synthesis indicate that the proposed method achieves better performance than the state-of-the-art methods. Our code is available at https://github.com/fiy2W/mri_seq2seq.
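The interpretability claim rests on a trainable weighted average: per-sequence scalar weights that are normalized so they can be read directly as contributions. A minimal NumPy sketch (softmax over trainable logits; names are illustrative, not the paper's module):

```python
import numpy as np

def weighted_fusion(features, logits):
    """Fuse per-sequence feature maps with softmax-normalized weights.

    features : list of same-shape arrays, one per input sequence
    logits   : array of per-sequence scalars (trainable in a real model)
    Softmax makes weights non-negative and sum to 1, so each weight is
    directly interpretable as that sequence's contribution.
    """
    w = np.exp(logits - logits.max())  # shift for numerical stability
    w = w / w.sum()
    fused = sum(wi * f for wi, f in zip(w, features))
    return fused, w
```

Inspecting `w` after training is what would let a user see, e.g., which input sequence dominated a given synthesis task.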
DOI: 10.48550/arXiv.2307.00885 | Pages: 45-55 | Published: 2023-07-03
Citations: 0
Many tasks make light work: Learning to localise medical anomalies from multiple synthetic tasks
Matthew Baugh, Jeremy Tan, Johanna P. Muller, Mischa Dombrowski, James Batten, Bernhard Kainz
There is a growing interest in single-class modelling and out-of-distribution detection as fully supervised machine learning models cannot reliably identify classes not included in their training. The long tail of infinitely many out-of-distribution classes in real-world scenarios, e.g., for screening, triage, and quality control, means that it is often necessary to train single-class models that represent an expected feature distribution, e.g., from only strictly healthy volunteer data. Conventional supervised machine learning would require the collection of datasets that contain enough samples of all possible diseases in every imaging modality, which is not realistic. Self-supervised learning methods with synthetic anomalies are currently amongst the most promising approaches, alongside generative auto-encoders that analyse the residual reconstruction error. However, all methods suffer from a lack of structured validation, which makes calibration for deployment difficult and dataset-dependent. Our method alleviates this by making use of multiple visually-distinct synthetic anomaly learning tasks for both training and validation. This enables more robust training and generalisation. With our approach we can readily outperform state-of-the-art methods, which we demonstrate on exemplars in brain MRI and chest X-rays. Code is available at https://github.com/matt-baugh/many-tasks-make-light-work.
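A common way to build synthetic anomaly tasks of the kind the abstract relies on is to blend a patch from one image into another, which yields an anomaly with a known ground-truth mask for free. A minimal NumPy sketch of one such task (alpha-blended patch pasting; this is a generic construction under stated assumptions, not the paper's specific task set):

```python
import numpy as np

def blend_patch_anomaly(img, src, y, x, h, w, alpha):
    """Create a synthetic anomaly by alpha-blending a patch from `src`
    into `img` at (y, x) with size (h, w).

    Returns the corrupted image and the binary ground-truth mask, so a
    localisation model can be trained and validated without real lesions.
    """
    out = img.copy()
    out[y:y + h, x:x + w] = ((1.0 - alpha) * img[y:y + h, x:x + w]
                             + alpha * src[y:y + h, x:x + w])
    mask = np.zeros_like(img)
    mask[y:y + h, x:x + w] = 1.0
    return out, mask
```

Varying the patch source, size, and blending factor across several such generators is one way to obtain the "multiple visually-distinct" tasks used for both training and held-out validation.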
DOI: 10.48550/arXiv.2307.00899 | Pages: 162-172 | Published: 2023-07-03
Citations: 0
Efficient Subclass Segmentation in Medical Images
Linrui Dai, Wenhui Lei, Xiaofan Zhang
As research interests in medical image analysis become increasingly fine-grained, the cost of extensive annotation also rises. One feasible way to reduce the cost is to annotate with coarse-grained superclass labels while using limited fine-grained annotations as a complement. In this way, fine-grained data learning is assisted by ample coarse annotations. Recent studies in classification tasks have adopted this method to achieve satisfactory results. However, there is a lack of research on efficient learning of fine-grained subclasses in semantic segmentation tasks. In this paper, we propose a novel approach that leverages the hierarchical structure of categories to design the network architecture. Meanwhile, a task-driven data generation method is presented to make it easier for the network to recognize different subclass categories. Specifically, we introduce a Prior Concatenation module, which enhances confidence in subclass segmentation by concatenating predicted logits from the superclass classifier; a Separate Normalization module, which stretches the intra-class distance within the same superclass to facilitate subclass segmentation; and a HierarchicalMix model, which generates high-quality pseudo labels for unlabeled samples by fusing only similar superclass regions from labeled and unlabeled images. Our experiments on the BraTS2021 and ACDC datasets demonstrate that, with limited subclass annotations and sufficient superclass annotations, our approach achieves accuracy comparable to a model trained with full subclass annotations. Our approach offers a promising solution for efficient fine-grained subclass segmentation in medical images. Our code is publicly available here.
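The hierarchy the method exploits implies a simple consistency constraint: a superclass probability should equal the sum of its subclass probabilities. A minimal NumPy sketch of that aggregation (the hierarchy mapping and class names are hypothetical, not from the paper):

```python
import numpy as np

def superclass_probs(sub_probs, hierarchy):
    """Aggregate subclass probabilities into superclass probabilities.

    sub_probs : array of per-subclass probabilities for one voxel
    hierarchy : mapping superclass name -> list of subclass indices
    The sum over a superclass's subclasses gives a prediction that is
    consistent with the coarse (superclass) annotation level.
    """
    return {s: sub_probs[idx].sum() for s, idx in hierarchy.items()}
```

This is the kind of quantity a superclass classifier supervised by coarse labels can check against, letting ample coarse annotations constrain the fine-grained head.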
DOI: 10.48550/arXiv.2307.00257 | Pages: 266-275 | Published: 2023-07-01
Citations: 0
Content-Preserving Diffusion Model for Unsupervised AS-OCT image Despeckling
Sanqian Li, Risa Higashita, Huazhu Fu, Heng Li, Jingxuan Liu, Jiang Liu
Anterior segment optical coherence tomography (AS-OCT) is a non-invasive imaging technique that is highly valuable for ophthalmic diagnosis. However, speckles in AS-OCT images can often degrade the image quality and affect clinical analysis. As a result, removing speckles in AS-OCT images can greatly benefit automated ophthalmic analysis. Unfortunately, challenges still exist in deploying effective AS-OCT image denoising algorithms, including collecting sufficient paired training data and the requirement to preserve consistent content in medical images. To address these practical issues, we propose an unsupervised AS-OCT despeckling algorithm via a Content-Preserving Diffusion Model (CPDM) with statistical knowledge. At the training stage, a Markov chain transforms clean images to white Gaussian noise by repeatedly adding random noise, and removes the predicted noise in a reverse procedure. At the inference stage, we first analyze the statistical distribution of speckles and convert it into a Gaussian distribution, aiming to match the fast truncated reverse diffusion process. We then explore the posterior distribution of observed images as a fidelity term to ensure content consistency in the iterative procedure. Our experimental results show that CPDM significantly improves image quality compared to competitive methods. Furthermore, we validate the benefits of CPDM for subsequent clinical analysis, including ciliary muscle (CM) segmentation and scleral spur (SS) localization.
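The Markov chain described above has a well-known closed form: repeatedly adding Gaussian noise with schedule beta_t is equivalent to a single jump using the cumulative product of (1 - beta_t). A minimal NumPy sketch of this standard forward-diffusion step (a generic DDPM-style formulation, not CPDM's exact schedule):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t from the diffusion forward chain in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t = prod_{s<=t} (1 - beta_s)."""
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return xt, eps
```

The "fast truncated reverse diffusion" in the abstract then amounts to starting the denoising chain from an intermediate t whose marginal matches the Gaussianized speckle statistics, rather than from pure noise.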
DOI: 10.48550/arXiv.2306.17717 | Pages: 660-670 | Published: 2023-06-30
Citations: 1
SimPLe: Similarity-Aware Propagation Learning for Weakly-Supervised Breast Cancer Segmentation in DCE-MRI
Yu-Min Zhong, Yi Wang
Breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays an important role in the screening and prognosis assessment of high-risk breast cancer. The segmentation of cancerous regions is essential for the subsequent analysis of breast MRI. To alleviate the annotation effort required to train segmentation networks, we propose a weakly-supervised strategy using extreme points as annotations for breast cancer segmentation. Without using any bells and whistles, our strategy focuses on fully exploiting the learning capability of the routine training procedure, i.e., the train - fine-tune - retrain process. The network first utilizes the pseudo-masks generated using the extreme points to train itself, by minimizing a contrastive loss, which encourages the network to learn more representative features for cancerous voxels. Then the trained network fine-tunes itself by using a similarity-aware propagation learning (SimPLe) strategy, which leverages feature similarity between unlabeled and positive voxels to propagate labels. Finally, the network retrains itself by employing the pseudo-masks generated by the previously fine-tuned network. The proposed method is evaluated on our collected DCE-MRI dataset containing 206 patients with biopsy-proven breast cancers. Experimental results demonstrate that our method effectively fine-tunes the network by using the SimPLe strategy, and achieves a mean Dice value of 81%.
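As a rough illustration of the extreme-point annotation idea, a coarse pseudo-mask can be seeded from the four clicked extremes of a lesion. This is a hypothetical sketch, not the authors' implementation: the function name, the elliptical prior, and the 2D setting are all assumptions made here for brevity.

```python
import numpy as np

def extreme_points_to_pseudo_mask(points, shape):
    """Coarse elliptical pseudo-mask from extreme-point annotations.

    points: iterable of (row, col) extreme points, e.g. the top/bottom/
            left/right of a lesion clicked by an annotator.
    shape:  (H, W) of the output mask.
    """
    pts = np.asarray(points, dtype=float)
    (r0, c0), (r1, c1) = pts.min(axis=0), pts.max(axis=0)
    center = ((r0 + r1) / 2.0, (c0 + c1) / 2.0)
    radii = (max((r1 - r0) / 2.0, 1.0), max((c1 - c0) / 2.0, 1.0))
    # Mark every pixel inside the ellipse spanned by the extreme points.
    rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
    inside = ((rr - center[0]) / radii[0]) ** 2 + \
             ((cc - center[1]) / radii[1]) ** 2 <= 1.0
    return inside.astype(np.uint8)

# Four extreme points of a lesion in a 10x12 image.
mask = extreme_points_to_pseudo_mask([(2, 5), (8, 5), (5, 1), (5, 9)], (10, 12))
```

In the pipeline described above, such pseudo-masks would only seed the first training round; the fine-tune and retrain stages then refine them.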
{"title":"SimPLe: Similarity-Aware Propagation Learning for Weakly-Supervised Breast Cancer Segmentation in DCE-MRI","authors":"Yu-Min Zhong, Yi Wang","doi":"10.48550/arXiv.2306.16714","DOIUrl":"https://doi.org/10.48550/arXiv.2306.16714","url":null,"abstract":"Breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays an important role in the screening and prognosis assessment of high-risk breast cancer. The segmentation of cancerous regions is essential useful for the subsequent analysis of breast MRI. To alleviate the annotation effort to train the segmentation networks, we propose a weakly-supervised strategy using extreme points as annotations for breast cancer segmentation. Without using any bells and whistles, our strategy focuses on fully exploiting the learning capability of the routine training procedure, i.e., the train - fine-tune - retrain process. The network first utilizes the pseudo-masks generated using the extreme points to train itself, by minimizing a contrastive loss, which encourages the network to learn more representative features for cancerous voxels. Then the trained network fine-tunes itself by using a similarity-aware propagation learning (SimPLe) strategy, which leverages feature similarity between unlabeled and positive voxels to propagate labels. Finally the network retrains itself by employing the pseudo-masks generated using previous fine-tuned network. The proposed method is evaluated on our collected DCE-MRI dataset containing 206 patients with biopsy-proven breast cancers. Experimental results demonstrate our method effectively fine-tunes the network by using the SimPLe strategy, and achieves a mean Dice value of 81%.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"19 1","pages":"567-577"},"PeriodicalIF":0.0,"publicationDate":"2023-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73990162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-IMU with Online Self-Consistency for Freehand 3D Ultrasound Reconstruction
Mingyuan Luo, Xin Yang, Zhongnuo Yan, Yuanji Zhang, Junyu Li, Jiongquan Chen, Xindi Hu, Jikuan Qian, Junda Cheng, Dong Ni
Ultrasound (US) imaging is a popular tool in clinical diagnosis, offering safety, repeatability, and real-time capabilities. Freehand 3D US is a technique that provides a deeper understanding of scanned regions without increasing complexity. However, estimating elevation displacement and accumulation error remains challenging, making it difficult to infer the relative position using images alone. The addition of external lightweight sensors has been proposed to enhance reconstruction performance without adding complexity, and has been shown to be beneficial. We propose a novel online self-consistency network (OSCNet) using multiple inertial measurement units (IMUs) to improve reconstruction performance. OSCNet utilizes a modal-level self-supervised strategy to fuse multiple IMU information and reduce differences between reconstruction results obtained from each IMU's data. Additionally, a sequence-level self-consistency strategy is proposed to improve the hierarchical consistency of prediction results among the scanning sequence and its sub-sequences. Experiments on large-scale arm and carotid datasets with multiple scanning tactics demonstrate that our OSCNet outperforms previous methods, achieving state-of-the-art reconstruction performance.
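The sequence-level self-consistency idea — the displacement predicted for a whole scan should agree with the chained displacements of its sub-sequences — can be sketched as a simple penalty term. This is a hypothetical helper for illustration only; the paper's actual loss formulation and trajectory representation are not specified here.

```python
import numpy as np

def sequence_consistency_loss(full_disp, sub_disps):
    """Squared-error penalty between the displacement predicted for a whole
    sequence and the sum of displacements predicted for its sub-sequences."""
    chained = np.sum(np.asarray(sub_disps, dtype=float), axis=0)
    return float(np.sum((np.asarray(full_disp, dtype=float) - chained) ** 2))

# Three sub-sequence displacement estimates (e.g. in mm along two axes).
sub = np.array([[1.0, 0.5], [0.2, -0.1], [0.3, 0.4]])
loss_consistent = sequence_consistency_loss(sub.sum(axis=0), sub)        # 0.0
loss_off_by_one = sequence_consistency_loss(sub.sum(axis=0) + 1.0, sub)  # 2.0
```

Minimizing such a penalty pushes the network's full-sequence and sub-sequence predictions toward hierarchical agreement, which is the stated goal of the sequence-level strategy.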
{"title":"Multi-IMU with Online Self-Consistency for Freehand 3D Ultrasound Reconstruction","authors":"Mingyuan Luo, Xin Yang, Zhongnuo Yan, Yuanji Zhang, Junyu Li, Jiongquan Chen, Xindi Hu, Jikuan Qian, Junda Cheng, Dong Ni","doi":"10.48550/arXiv.2306.16197","DOIUrl":"https://doi.org/10.48550/arXiv.2306.16197","url":null,"abstract":"Ultrasound (US) imaging is a popular tool in clinical diagnosis, offering safety, repeatability, and real-time capabilities. Freehand 3D US is a technique that provides a deeper understanding of scanned regions without increasing complexity. However, estimating elevation displacement and accumulation error remains challenging, making it difficult to infer the relative position using images alone. The addition of external lightweight sensors has been proposed to enhance reconstruction performance without adding complexity, which has been shown to be beneficial. We propose a novel online self-consistency network (OSCNet) using multiple inertial measurement units (IMUs) to improve reconstruction performance. OSCNet utilizes a modal-level self-supervised strategy to fuse multiple IMU information and reduce differences between reconstruction results obtained from each IMU data. Additionally, a sequence-level self-consistency strategy is proposed to improve the hierarchical consistency of prediction results among the scanning sequence and its sub-sequences. Experiments on large-scale arm and carotid datasets with multiple scanning tactics demonstrate that our OSCNet outperforms previous methods, achieving state-of-the-art reconstruction performance.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"1 1","pages":"342-351"},"PeriodicalIF":0.0,"publicationDate":"2023-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90223479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Reconstructing the Hemodynamic Response Function via a Bimodal Transformer
Yoni Choukroun, Lior Golgher, P. Blinder, L. Wolf
The relationship between blood flow and neuronal activity is widely recognized, with blood flow frequently serving as a surrogate for neuronal activity in fMRI studies. At the microscopic level, neuronal activity has been shown to influence blood flow in nearby blood vessels. This study introduces the first predictive model that addresses this issue directly at the explicit neuronal population level. Using in vivo recordings in awake mice, we employ a novel spatiotemporal bimodal transformer architecture to infer current blood flow based on both historical blood flow and ongoing spontaneous neuronal activity. Our findings indicate that incorporating neuronal activity significantly enhances the model's ability to predict blood flow values. Through analysis of the model's behavior, we propose hypotheses regarding the largely unexplored nature of the hemodynamic response to neuronal activity.
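A toy version of the bimodal idea — predicting the next blood-flow sample from both flow history and neuronal activity — can be demonstrated with an ordinary least-squares fit on synthetic data. The generating coefficients (0.5 and 0.3) and the linear model are made up here purely for illustration; the paper uses a spatiotemporal transformer, not a linear regression.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 200
neural = rng.normal(size=T)   # synthetic spontaneous neuronal activity
flow = np.zeros(T)
for t in range(T - 1):        # flow driven by its own history plus neural input
    flow[t + 1] = 0.5 * flow[t] + 0.3 * neural[t]

# Bimodal design matrix: previous flow sample and previous neural sample.
X = np.column_stack([flow[:-1], neural[:-1]])
coef, *_ = np.linalg.lstsq(X, flow[1:], rcond=None)
# On this noise-free toy data, coef recovers the generating weights [0.5, 0.3].
```

Comparing such a bimodal fit against a flow-only baseline mirrors the study's finding that incorporating neuronal activity improves blood-flow prediction.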
{"title":"Reconstructing the Hemodynamic Response Function via a Bimodal Transformer","authors":"Yoni Choukroun, Lior Golgher, P. Blinder, L. Wolf","doi":"10.48550/arXiv.2306.15971","DOIUrl":"https://doi.org/10.48550/arXiv.2306.15971","url":null,"abstract":"The relationship between blood flow and neuronal activity is widely recognized, with blood flow frequently serving as a surrogate for neuronal activity in fMRI studies. At the microscopic level, neuronal activity has been shown to influence blood flow in nearby blood vessels. This study introduces the first predictive model that addresses this issue directly at the explicit neuronal population level. Using in vivo recordings in awake mice, we employ a novel spatiotemporal bimodal transformer architecture to infer current blood flow based on both historical blood flow and ongoing spontaneous neuronal activity. Our findings indicate that incorporating neuronal activity significantly enhances the model's ability to predict blood flow values. Through analysis of the model's behavior, we propose hypotheses regarding the largely unexplored nature of the hemodynamic response to neuronal activity.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"57 1","pages":"371-381"},"PeriodicalIF":0.0,"publicationDate":"2023-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80575167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0