Home > Latest Publications

Latest publications in Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention
Soft-tissue Driven Craniomaxillofacial Surgical Planning
Xi Fang, Daeseung Kim, Xuanang Xu, Tianshu Kuang, Nathan Lampen, Jungwook Lee, H. Deng, J. Gateno, M. Liebschner, J. Xia, Pingkun Yan
In CMF surgery, the planning of bony movement to achieve a desired facial outcome is a challenging task. Current bone driven approaches focus on normalizing the bone with the expectation that the facial appearance will be corrected accordingly. However, due to the complex non-linear relationship between bony structure and facial soft-tissue, such bone-driven methods are insufficient to correct facial deformities. Despite efforts to simulate facial changes resulting from bony movement, surgical planning still relies on iterative revisions and educated guesses. To address these issues, we propose a soft-tissue driven framework that can automatically create and verify surgical plans. Our framework consists of a bony planner network that estimates the bony movements required to achieve the desired facial outcome and a facial simulator network that can simulate the possible facial changes resulting from the estimated bony movement plans. By combining these two models, we can verify and determine the final bony movement required for planning. The proposed framework was evaluated using a clinical dataset, and our experimental results demonstrate that the soft-tissue driven approach greatly improves the accuracy and efficacy of surgical planning when compared to the conventional bone-driven approach.
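The plan-and-verify loop described above can be sketched as follows. In the paper, the bony planner and facial simulator are deep networks trained on clinical data; here both are replaced by a known linear "anatomy" map, purely so the create-simulate-verify cycle can be demonstrated end to end:

```python
import numpy as np

# Illustrative stand-ins only: a toy linear anatomy where
# facial change = A @ bony movement.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))

def facial_simulator(bony_move):
    # stand-in for the facial simulator network
    return A @ bony_move

def bony_planner(desired_face):
    # stand-in for the bony planner network
    return np.linalg.solve(A, desired_face)

def plan_and_verify(desired_face, tol=1e-6):
    # create a plan, simulate its facial outcome, and check the match
    plan = bony_planner(desired_face)
    predicted = facial_simulator(plan)
    residual = float(np.linalg.norm(predicted - desired_face))
    return plan, residual, residual < tol

desired = rng.standard_normal(6)
plan, residual, verified = plan_and_verify(desired)
```

In the real framework the simulator is an independent model, so a large residual flags a plan that needs revision rather than being trusted blindly.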
DOI: 10.48550/arXiv.2307.10954 · Published: 2023-07-20 · Pages: 186-195
Citations: 0
FedSoup: Improving Generalization and Personalization in Federated Learning via Selective Model Interpolation
Minghui Chen, Meirui Jiang, Qianming Dou, Zehua Wang, Xiaoxiao Li
Cross-silo federated learning (FL) enables the development of machine learning models on datasets distributed across data centers such as hospitals and clinical research laboratories. However, recent research has found that current FL algorithms face a trade-off between local and global performance when confronted with distribution shifts. Specifically, personalized FL methods have a tendency to overfit to local data, leading to a sharp valley in the local model and inhibiting its ability to generalize to out-of-distribution data. In this paper, we propose a novel federated model soup method (i.e., selective interpolation of model parameters) to optimize the trade-off between local and global performance. Specifically, during the federated training phase, each client maintains its own global model pool by monitoring the performance of the interpolated model between the local and global models. This allows us to alleviate overfitting and seek flat minima, which can significantly improve the model's generalization performance. We evaluate our method on retinal and pathological image classification tasks, and our proposed method achieves significant improvements for out-of-distribution generalization. Our code is available at https://github.com/ubc-tea/FedSoup.
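The selective interpolation idea can be sketched in a few lines: parameters are mixed between the local and global models, and the client keeps the mixture that scores best on held-out data. The flat parameter dicts and the quadratic `val_score` below are made up for illustration, not the paper's setup:

```python
import numpy as np

def interpolate(local, global_, alpha):
    # "model soup" style weighted average of parameter tensors
    return {k: alpha * local[k] + (1 - alpha) * global_[k] for k in local}

def best_soup(local, global_, val_score, alphas=(0.0, 0.25, 0.5, 0.75, 1.0)):
    # keep the interpolation coefficient that maximizes validation score
    best_alpha = max(alphas, key=lambda a: val_score(interpolate(local, global_, a)))
    return best_alpha, interpolate(local, global_, best_alpha)

local = {"w": np.array([1.0])}
global_ = {"w": np.array([0.0])}
val_score = lambda p: -float((p["w"][0] - 0.5) ** 2)  # toy metric peaking at w = 0.5
alpha, soup = best_soup(local, global_, val_score)
```

Monitoring the interpolated model rather than the endpoints is what lets the client trade off personalization (alpha near 1) against global generalization (alpha near 0).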
DOI: 10.48550/arXiv.2307.10507 · Published: 2023-07-20 · Pages: 318-328
Citations: 1
EdgeAL: An Edge Estimation Based Active Learning Approach for OCT Segmentation
Md Abdul Kadir, Hasan Md Tusfiqur Alam, Daniel Sonntag
Active learning algorithms have become increasingly popular for training models with limited data. However, selecting data for annotation remains a challenging problem due to the limited information available on unseen data. To address this issue, we propose EdgeAL, which utilizes the edge information of unseen images as a priori information for measuring uncertainty. The uncertainty is quantified by analyzing the divergence and entropy in model predictions across edges. This measure is then used to select superpixels for annotation. We demonstrate the effectiveness of EdgeAL on multi-class Optical Coherence Tomography (OCT) segmentation tasks, where we achieved a 99% dice score while reducing the annotation label cost to 12%, 2.3%, and 3%, respectively, on three publicly available datasets (Duke, AROI, and UMN). The source code is available at https://github.com/Mak-Ta-Reque/EdgeAL.
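A toy version of the edge-weighted uncertainty scoring might look like this: per-pixel predictive entropy is weighted by an edge-strength map and averaged per superpixel, and the highest-scoring superpixels would be queried for annotation. The edge map and superpixel labels below are hand-built for illustration:

```python
import numpy as np

def entropy_map(probs):
    # probs: (C, H, W) per-pixel class probabilities
    return -(probs * np.log(probs + 1e-12)).sum(axis=0)

def superpixel_scores(probs, edges, superpixels):
    # weight per-pixel entropy by edge strength, then average per superpixel
    weighted = entropy_map(probs) * edges
    return {int(s): float(weighted[superpixels == s].mean())
            for s in np.unique(superpixels)}

probs = np.stack([np.full((2, 2), 0.5), np.full((2, 2), 0.5)])  # maximally uncertain
edges = np.array([[1.0, 0.0], [0.0, 0.0]])                      # edge only at (0, 0)
superpixels = np.array([[0, 0], [1, 1]])
scores = superpixel_scores(probs, edges, superpixels)
```

Superpixel 0 contains the edge pixel and therefore outranks superpixel 1, so it would be sent for annotation first.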
DOI: 10.48550/arXiv.2307.10745 · Published: 2023-07-20 · Pages: 79-89
Citations: 0
SAMConvex: Fast Discrete Optimization for CT Registration using Self-supervised Anatomical Embedding and Correlation Pyramid
Zi Li, Lin Tian, Tony C. W. Mok, Xiaoyu Bai, Puyang Wang, J. Ge, Jingren Zhou, Le Lu, X. Ye, K. Yan, D. Jin
Estimating a displacement vector field via a cost volume computed in the feature space has shown great success in image registration, but it suffers from excessive computation burdens. Moreover, existing feature descriptors only extract local features, incapable of representing the global semantic information, which is especially important for solving large transformations. To address the discussed issues, we propose SAMConvex, a fast coarse-to-fine discrete optimization method for CT registration that includes a decoupled convex optimization procedure to obtain deformation fields based on a self-supervised anatomical embedding (SAM) feature extractor that captures both local and global information. To be specific, SAMConvex extracts per-voxel features and builds 6D correlation volumes based on SAM features, and iteratively updates a flow field by performing lookups on the correlation volumes with a coarse-to-fine scheme. SAMConvex outperforms the state-of-the-art learning-based methods and optimization-based methods over two inter-patient registration datasets (Abdomen CT and HeadNeck CT) and one intra-patient registration dataset (Lung CT). Moreover, as an optimization-based method, SAMConvex only takes ~2 s (~5 s with instance optimization) for one pair of images.
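A 1-D toy of the correlation-volume lookup may help: each position carries a feature (a one-hot stand-in for a SAM embedding), correlations are built over a small displacement window, and the flow update picks the best-correlated shift per position. The real method does this over 6-D volumes with a coarse-to-fine pyramid:

```python
import numpy as np

def correlation_volume(feat_fix, feat_mov, radius):
    # correlate each fixed position with moving positions within +/- radius
    n = feat_fix.shape[0]
    corr = np.full((n, 2 * radius + 1), -np.inf)
    for i in range(n):
        for k, d in enumerate(range(-radius, radius + 1)):
            j = i + d
            if 0 <= j < n:
                corr[i, k] = feat_fix[i] @ feat_mov[j]
    return corr

def lookup_flow(corr, radius):
    # discrete update: best-correlated displacement at each position
    return corr.argmax(axis=1) - radius

feat_fix = np.eye(10)                    # one-hot "features", one per position
feat_mov = np.roll(feat_fix, 1, axis=0)  # moving image shifted by +1
flow = lookup_flow(correlation_volume(feat_fix, feat_mov, radius=2), radius=2)
```

Because the moving features are the fixed features shifted by one position, the recovered flow is +1 everywhere except at the boundary, where the true match falls outside the search window.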
DOI: 10.48550/arXiv.2307.09727 · Published: 2023-07-19 · Pages: 559-569
Citations: 3
TractCloud: Registration-free tractography parcellation with a novel local-global streamline point cloud representation
Tengfei Xue, Yuqian Chen, Chaoyi Zhang, A. Golby, N. Makris, Y. Rathi, Weidong (Tom) Cai, Fan Zhang, L. O’Donnell
Diffusion MRI tractography parcellation classifies streamlines into anatomical fiber tracts to enable quantification and visualization for clinical and scientific applications. Current tractography parcellation methods rely heavily on registration, but registration inaccuracies can affect parcellation and the computational cost of registration is high for large-scale datasets. Recently, deep-learning-based methods have been proposed for tractography parcellation using various types of representations for streamlines. However, these methods only focus on the information from a single streamline, ignoring geometric relationships between the streamlines in the brain. We propose TractCloud, a registration-free framework that performs whole-brain tractography parcellation directly in individual subject space. We propose a novel, learnable, local-global streamline representation that leverages information from neighboring and whole-brain streamlines to describe the local anatomy and global pose of the brain. We train our framework on a large-scale labeled tractography dataset, which we augment by applying synthetic transforms including rotation, scaling, and translations. We test our framework on five independently acquired datasets across populations and health conditions. TractCloud significantly outperforms several state-of-the-art methods on all testing datasets. TractCloud achieves efficient and consistent whole-brain white matter parcellation across the lifespan (from neonates to elderly subjects, including brain tumor patients) without the need for registration. The robustness and high inference speed of TractCloud make it suitable for large-scale tractography data analysis. Our project page is available at https://tractcloud.github.io/.
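A minimal sketch of a local-global input for one streamline: the streamline itself, its nearest neighbours by centroid distance (local anatomy), and a few randomly sampled whole-brain streamlines (global pose). The shapes, the neighbour rule, and the sampling are illustrative, not the paper's exact construction:

```python
import numpy as np

def local_global_input(streamlines, idx, k_local=2, k_global=2, seed=0):
    # streamlines: (M, P, 3) array of M streamlines with P points each
    rng = np.random.default_rng(seed)
    centroids = streamlines.mean(axis=1)                  # (M, 3)
    dist = np.linalg.norm(centroids - centroids[idx], axis=1)
    neighbours = np.argsort(dist)[1:k_local + 1]          # skip idx itself
    global_ids = rng.choice(len(streamlines), size=k_global, replace=False)
    picked = np.concatenate(([idx], neighbours, global_ids))
    return streamlines[picked]                            # (1 + k_local + k_global, P, 3)

streamlines = np.random.default_rng(1).standard_normal((20, 15, 3))
x = local_global_input(streamlines, idx=3)
```

Because the representation is built per subject from the streamlines themselves, no registration to a template space is needed at inference time.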
DOI: 10.48550/arXiv.2307.09000 · Published: 2023-07-18 · Pages: 409-419
Citations: 0
Smooth Attention for Deep Multiple Instance Learning: Application to CT Intracranial Hemorrhage Detection
Yunan Wu, Francisco M. Castro-Mac'ias, Pablo Morales-Álvarez, R. Molina, A. Katsaggelos
Multiple Instance Learning (MIL) has been widely applied to medical imaging diagnosis, where bag labels are known and instance labels inside bags are unknown. Traditional MIL assumes that instances in each bag are independent samples from a given distribution. However, instances are often spatially or sequentially ordered, and one would expect similar diagnostic importance for neighboring instances. To address this, in this study, we propose a smooth attention deep MIL (SA-DMIL) model. Smoothness is achieved by the introduction of first and second order constraints on the latent function encoding the attention paid to each instance in a bag. The method is applied to the detection of intracranial hemorrhage (ICH) on head CT scans. The results show that this novel SA-DMIL: (a) achieves better performance than the non-smooth attention MIL at both scan (bag) and slice (instance) levels; (b) learns spatial dependencies between slices; and (c) outperforms current state-of-the-art MIL methods on the same ICH test set.
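The smoothness constraints can be illustrated in isolation: given latent attention values over the ordered slices of one scan (bag), a first-order term penalises jumps between neighbouring slices and a second-order term penalises curvature. In the model these penalties would be added to the MIL training loss; the profiles below are made up:

```python
import numpy as np

def smooth_attention_penalties(f):
    # f: latent attention values over ordered slices in one bag (scan)
    first = float((np.diff(f) ** 2).sum())        # neighbouring-slice differences
    second = float((np.diff(f, n=2) ** 2).sum())  # discrete curvature
    return first, second

linear = np.linspace(0.0, 1.0, 6)                        # smooth attention profile
spiky = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])         # slice-to-slice flicker
p1_lin, p2_lin = smooth_attention_penalties(linear)
p1_spk, p2_spk = smooth_attention_penalties(spiky)
```

A linear profile incurs no second-order penalty at all, while the flickering profile is heavily penalised by both terms, which is exactly the behaviour that encodes "neighbouring slices should matter similarly".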
DOI: 10.48550/arXiv.2307.09457 · Published: 2023-07-18 · Pages: 327-337
Citations: 1
You've Got Two Teachers: Co-evolutionary Image and Report Distillation for Semi-supervised Anatomical Abnormality Detection in Chest X-ray
J. Sun, Dong Wei, Zhe Xu, Donghuan Lu, Hong Liu, Liansheng Wang, Yefeng Zheng
Chest X-ray (CXR) anatomical abnormality detection aims at localizing and characterising cardiopulmonary radiological findings in the radiographs, which can expedite clinical workflow and reduce observational oversights. Most existing methods attempted this task in either fully supervised settings which demanded costly mass per-abnormality annotations, or weakly supervised settings which still lagged badly behind fully supervised methods in performance. In this work, we propose a co-evolutionary image and report distillation (CEIRD) framework, which approaches semi-supervised abnormality detection in CXR by grounding the visual detection results with text-classified abnormalities from paired radiology reports, and vice versa. Concretely, based on the classical teacher-student pseudo label distillation (TSD) paradigm, we additionally introduce an auxiliary report classification model, whose prediction is used for report-guided pseudo detection label refinement (RPDLR) in the primary vision detection task. Inversely, we also use the prediction of the vision detection model for abnormality-guided pseudo classification label refinement (APCLR) in the auxiliary report classification task, and propose a co-evolution strategy where the vision and report models mutually promote each other with RPDLR and APCLR performed alternatively. To this end, we effectively incorporate the weak supervision by reports into the semi-supervised TSD pipeline. Besides the cross-modal pseudo label refinement, we further propose an intra-image-modal self-adaptive non-maximum suppression, where the pseudo detection labels generated by the teacher vision model are dynamically rectified by high-confidence predictions by the student. Experimental results on the public MIMIC-CXR benchmark demonstrate CEIRD's superior performance to several up-to-date weakly and semi-supervised methods.
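The report-guided refinement step (RPDLR) reduces to a simple filter in its most stripped-down form: pseudo boxes from the teacher detector are kept only when the paired report classifier also supports that abnormality class. The class ids, scores, and threshold below are illustrative placeholders:

```python
def refine_pseudo_labels(pseudo_boxes, report_probs, thresh=0.5):
    # pseudo_boxes: list of (box, class_id, score) from the teacher detector
    # report_probs: per-class probabilities from the auxiliary report classifier
    return [(box, cls, score) for (box, cls, score) in pseudo_boxes
            if report_probs[cls] >= thresh]

boxes = [((10, 10, 50, 50), 0, 0.9),   # class 0: the report supports it
         ((60, 60, 90, 90), 1, 0.8)]   # class 1: the report does not mention it
kept = refine_pseudo_labels(boxes, report_probs={0: 0.7, 1: 0.1})
```

In the full framework the mirror-image step (APCLR) refines the report classifier's pseudo labels using the detector, and the two models co-evolve by alternating these refinements.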
DOI: 10.48550/arXiv.2307.09184 · Published: 2023-07-18 · Pages: 363-373
Citations: 1
Surgical Action Triplet Detection by Mixed Supervised Learning of Instrument-Tissue Interactions
Saurav Sharma, C. Nwoye, D. Mutter, N. Padoy
Surgical action triplets describe instrument-tissue interactions as (instrument, verb, target) combinations, thereby supporting a detailed analysis of surgical scene activities and workflow. This work focuses on surgical action triplet detection, which is challenging but more precise than the traditional triplet recognition task as it consists of joint (1) localization of surgical instruments and (2) recognition of the surgical action triplet associated with every localized instrument. Triplet detection is highly complex due to the lack of spatial triplet annotation. We analyze how the amount of instrument spatial annotations affects triplet detection and observe that accurate instrument localization does not guarantee better triplet detection due to the risk of erroneous associations with the verbs and targets. To solve the two tasks, we propose MCIT-IG, a two-stage network whose name stands for Multi-Class Instrument-aware Transformer-Interaction Graph. The MCIT stage of our network models per-class embeddings of the targets as additional features to reduce the risk of misassociating triplets. Furthermore, the IG stage constructs a bipartite dynamic graph to model the interaction between the instruments and targets, cast as the verbs. We utilize a mixed-supervised learning strategy that combines weak target presence labels for MCIT and pseudo triplet labels for IG to train our network. We observed that complementing minimal instrument spatial annotations with target embeddings results in better triplet detection. We evaluate our model on the CholecT50 dataset and show improved performance on both instrument localization and triplet detection, topping the leaderboard of the CholecTriplet challenge in MICCAI 2022.
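A toy readout of the instrument-target interaction graph might score every (instrument, verb, target) combination with a bilinear verb scorer and keep the best verb/target pair per localized instrument. The features, target embeddings, and verb scorers below are hand-built for illustration, not the paper's learned modules:

```python
import numpy as np

def predict_triplets(instr_feats, target_embeds, verb_weights):
    # instr_feats: (I, D); target_embeds: (T, E); verb_weights: (V, D, E)
    triplets = []
    for i, f in enumerate(instr_feats):
        # score[v, t] = f @ verb_weights[v] @ target_embeds[t]
        scores = np.einsum("d,vde,te->vt", f, verb_weights, target_embeds)
        v, t = np.unravel_index(scores.argmax(), scores.shape)
        triplets.append((i, int(v), int(t)))
    return triplets

instr = np.array([[1.0, 0.0]])                    # one localized instrument
targets = np.array([[1.0, 0.0], [0.0, 1.0]])      # two target-class embeddings
verbs = np.stack([np.eye(2), np.zeros((2, 2))])   # verb 0 can match, verb 1 never
triplets = predict_triplets(instr, targets, verbs)
```

Conditioning the graph edges on per-class target embeddings is what lets weak target presence labels supervise the interaction stage without box-level triplet annotation.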
DOI: 10.48550/arXiv.2307.09548 · Pages: 505-514 · Published: 2023-07-18
Citations: 0
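The triplet-detection output described above — one (instrument, verb, target) combination per localized instrument — can be pictured with a toy pairing over an instrument-target interaction score, loosely mirroring the IG stage's bipartite graph. The verb list and scoring function here are illustrative assumptions, not MCIT-IG's actual architecture:

```python
VERBS = ("grasp", "retract", "dissect", "cut")

def detect_triplets(instruments, targets, score):
    """For each localized instrument, choose the (verb, target) pair with
    the highest interaction score, yielding (instrument, verb, target)
    triplets -- a toy stand-in for a learned bipartite interaction graph."""
    triplets = []
    for inst in instruments:
        verb, tgt, _ = max(
            ((v, t, score(inst, t, v)) for t in targets for v in VERBS),
            key=lambda x: x[2],
        )
        triplets.append((inst, verb, tgt))
    return triplets
```

In a learned model, `score` would be a network head over instrument and target embeddings; the output structure — exactly one triplet per localized instrument — is what distinguishes detection from image-level triplet recognition.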
M-FLAG: Medical Vision-Language Pre-training with Frozen Language Models and Latent Space Geometry Optimization
Che Liu, Sibo Cheng, Chen Chen, Mengyun Qiao, Weitong Zhang, Anand Shah, Wenjia Bai, Rossella Arcucci
Medical vision-language models enable co-learning and integrating features from medical imaging and clinical text. However, these models are not easy to train and the latent representation space can be complex. Here we propose a novel way for pre-training and regularising medical vision-language models. The proposed method, named Medical vision-language pre-training with Frozen language models and Latent spAce Geometry optimization (M-FLAG), leverages a frozen language model for training stability and efficiency and introduces a novel orthogonality loss to harmonize the latent space geometry. We demonstrate the potential of the pre-trained model on three downstream tasks: medical image classification, segmentation, and object detection. Extensive experiments across five public datasets demonstrate that M-FLAG significantly outperforms existing medical vision-language pre-training approaches and reduces the number of parameters by 78%. Notably, M-FLAG achieves outstanding performance on the segmentation task while using only 1% of the RSNA dataset, even outperforming ImageNet pre-trained models that have been fine-tuned using 100% of the data.
DOI: 10.48550/arXiv.2307.08347 · Pages: 637-647 · Published: 2023-07-17
Citations: 10
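The abstract names an orthogonality loss on the latent space but does not spell out its form here. A common formulation penalizes the Gram matrix's deviation from the identity, i.e. ||E Eᵀ − I||²_F over a batch of latent vectors, pushing them toward mutual orthogonality. A dependency-free sketch of that standard penalty — the exact M-FLAG loss may differ:

```python
def orthogonality_loss(E):
    """Frobenius penalty ||E E^T - I||_F^2 on a list of latent vectors E.
    Off-diagonal dot products are driven to 0 (orthogonality) and
    diagonal ones to 1 (unit norm)."""
    n = len(E)
    loss = 0.0
    for i in range(n):
        for j in range(n):
            dot = sum(a * b for a, b in zip(E[i], E[j]))
            target = 1.0 if i == j else 0.0
            loss += (dot - target) ** 2
    return loss
```

In practice this would be computed on GPU as a single matrix product and added, with a weighting coefficient, to the contrastive vision-language objective.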
A Novel Multi-Task Model Imitating Dermatologists for Accurate Differential Diagnosis of Skin Diseases in Clinical Images
Yan Zhou, Wei Liu, Yuan Gao, Jingyi Xu, Lexian Lu, Yu Duan, Hao Cheng, Na Jin, Xiaoyong Man, Shuang Zhao, Yu Wang
Skin diseases are among the most prevalent health issues, and accurate computer-aided diagnosis methods are important to both dermatologists and patients. However, most existing methods overlook the essential domain knowledge required for skin disease diagnosis. A novel multi-task model, DermImitFormer, is proposed to fill this gap by imitating dermatologists' diagnostic procedures and strategies. Through multi-task learning, the model simultaneously predicts body parts and lesion attributes in addition to the disease itself, improving both diagnostic accuracy and interpretability. The lesion selection module mimics dermatologists' zoom-in action, effectively highlighting local lesion features against noisy backgrounds. Additionally, the cross-interaction module explicitly models the complicated diagnostic reasoning between body parts, lesion attributes, and diseases. To provide a more robust evaluation of the proposed method, a large-scale clinical image dataset of skin diseases, with significantly more cases than existing datasets, has been established. Extensive experiments on three different datasets consistently demonstrate the state-of-the-art recognition performance of the proposed approach.
DOI: 10.48550/arXiv.2307.08308 · Pages: 202-212 · Published: 2023-07-17
Citations: 0
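Multi-task learning of the kind described above typically optimizes a weighted sum of per-task losses — here disease, body part, and lesion attributes. A minimal sketch with illustrative task weights (the weights and the plain cross-entropy form are assumptions, not taken from the paper):

```python
import math

def cross_entropy(probs, true_idx):
    """Negative log-likelihood of the true class given predicted probabilities."""
    return -math.log(probs[true_idx])

def multi_task_loss(disease_p, part_p, attr_p,
                    y_disease, y_part, y_attr,
                    w=(1.0, 0.5, 0.5)):
    """Weighted sum of the three per-task cross-entropy losses."""
    return (w[0] * cross_entropy(disease_p, y_disease)
            + w[1] * cross_entropy(part_p, y_part)
            + w[2] * cross_entropy(attr_p, y_attr))
```

The auxiliary body-part and attribute heads share a backbone with the disease head, so their gradients regularize the shared features — the mechanism behind the accuracy and interpretability gains the abstract reports.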