
Latest publications in Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention

EGE-UNet: an Efficient Group Enhanced UNet for skin lesion segmentation
Jiacheng Ruan, Mingye Xie, Jingsheng Gao, Ting Liu, Yuzhuo Fu
Transformer and its variants have been widely used for medical image segmentation. However, the large number of parameters and the computational load of these models make them unsuitable for mobile health applications. To address this issue, we propose a more efficient approach, the Efficient Group Enhanced UNet (EGE-UNet). We incorporate a Group multi-axis Hadamard Product Attention module (GHPA) and a Group Aggregation Bridge module (GAB) in a lightweight manner. The GHPA groups input features and performs the Hadamard Product Attention mechanism (HPA) on different axes to extract pathological information from diverse perspectives. The GAB effectively fuses multi-scale information by grouping low-level features, high-level features, and a mask generated by the decoder at each stage. Comprehensive experiments on the ISIC2017 and ISIC2018 datasets demonstrate that EGE-UNet outperforms existing state-of-the-art methods. In short, compared to TransFuse, our model achieves superior segmentation performance while reducing parameter and computation costs by 494x and 160x, respectively. Moreover, to the best of our knowledge, this is the first model with a parameter count limited to just 50KB. Our code is available at https://github.com/JCruan519/EGE-UNet.
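To make the grouped multi-axis Hadamard-product idea concrete, the following is a minimal PyTorch sketch: channels are split into groups and each group is modulated elementwise by a learnable weight along a different axis, with one group handled by a depth-wise convolution. Shapes, group count, and module names are illustrative assumptions, not the authors' GHPA implementation.

```python
# Minimal sketch of grouped, multi-axis Hadamard-product attention.
# Assumptions: 4 groups, fixed spatial size, elementwise (Hadamard) modulation only.
import torch
import torch.nn as nn

class GroupedHadamardAttention(nn.Module):
    def __init__(self, channels: int, size: int = 32):
        super().__init__()
        assert channels % 4 == 0
        g = channels // 4
        # Learnable per-axis weights, broadcast over the remaining axes.
        self.w_h = nn.Parameter(torch.ones(1, g, size, 1))  # modulate along height
        self.w_w = nn.Parameter(torch.ones(1, g, 1, size))  # modulate along width
        self.w_c = nn.Parameter(torch.ones(1, g, 1, 1))     # modulate along channels
        self.dw = nn.Conv2d(g, g, 3, padding=1, groups=g)   # local depth-wise branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2, x3, x4 = x.chunk(4, dim=1)
        return torch.cat([
            x1 * self.w_h,   # Hadamard product along H
            x2 * self.w_w,   # Hadamard product along W
            x3 * self.w_c,   # Hadamard product along C
            self.dw(x4),     # convolutional branch for local context
        ], dim=1)

x = torch.randn(1, 16, 32, 32)
print(GroupedHadamardAttention(16)(x).shape)  # torch.Size([1, 16, 32, 32])
```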
{"title":"EGE-UNet: an Efficient Group Enhanced UNet for skin lesion segmentation","authors":"Jiacheng Ruan, Mingye Xie, Jingsheng Gao, Ting Liu, Yuzhuo Fu","doi":"10.48550/arXiv.2307.08473","DOIUrl":"https://doi.org/10.48550/arXiv.2307.08473","url":null,"abstract":"Transformer and its variants have been widely used for medical image segmentation. However, the large number of parameter and computational load of these models make them unsuitable for mobile health applications. To address this issue, we propose a more efficient approach, the Efficient Group Enhanced UNet (EGE-UNet). We incorporate a Group multi-axis Hadamard Product Attention module (GHPA) and a Group Aggregation Bridge module (GAB) in a lightweight manner. The GHPA groups input features and performs Hadamard Product Attention mechanism (HPA) on different axes to extract pathological information from diverse perspectives. The GAB effectively fuses multi-scale information by grouping low-level features, high-level features, and a mask generated by the decoder at each stage. Comprehensive experiments on the ISIC2017 and ISIC2018 datasets demonstrate that EGE-UNet outperforms existing state-of-the-art methods. In short, compared to the TransFuse, our model achieves superior segmentation performance while reducing parameter and computation costs by 494x and 160x, respectively. Moreover, to our best knowledge, this is the first model with a parameter count limited to just 50KB. Our code is available at https://github.com/JCruan519/EGE-UNet.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"8 1","pages":"481-490"},"PeriodicalIF":0.0,"publicationDate":"2023-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83414833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
LUCYD: A Feature-Driven Richardson-Lucy Deconvolution Network
Tomás Chobola, Gesine Müller, V. Dausmann, Anton Theileis, J. Taucher, J. Huisken, Tingying Peng
The process of acquiring microscopic images in life sciences often results in image degradation and corruption, characterised by the presence of noise and blur, which poses significant challenges in accurately analysing and interpreting the obtained data. This paper proposes LUCYD, a novel method for the restoration of volumetric microscopy images that combines the Richardson-Lucy deconvolution formula and the fusion of deep features obtained by a fully convolutional network. By integrating the image formation process into a feature-driven restoration model, the proposed approach aims to enhance the quality of the restored images whilst reducing computational costs and maintaining a high degree of interpretability. Our results demonstrate that LUCYD outperforms the state-of-the-art methods in both synthetic and real microscopy images, achieving superior performance in terms of image quality and generalisability. We show that the model can handle various microscopy modalities and different imaging conditions by evaluating it on two different microscopy datasets, including volumetric widefield and light-sheet microscopy. Our experiments indicate that LUCYD can significantly improve resolution, contrast, and overall quality of microscopy images. Therefore, it can be a valuable tool for microscopy image restoration and can facilitate further research in various microscopy applications. We made the source code for the model accessible under https://github.com/ctom2/lucyd-deconvolution.
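The classical Richardson-Lucy update that LUCYD builds on can be written in a few lines; the sketch below shows only that iterative formula in NumPy/SciPy, assuming a known, normalized PSF, and omits the deep feature-fusion part entirely.

```python
# Classical Richardson-Lucy deconvolution (no learned components).
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed: np.ndarray, psf: np.ndarray, n_iter: int = 30) -> np.ndarray:
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.clip(blurred, 1e-12, None)   # data / re-blurred estimate
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy usage: blur an impulse with a Gaussian PSF, then restore it.
yy, xx = np.meshgrid(np.arange(15) - 7, np.arange(15) - 7, indexing="ij")
psf = np.exp(-(xx**2 + yy**2) / 8.0)
img = np.zeros((64, 64)); img[32, 32] = 1.0
blurred = fftconvolve(img, psf / psf.sum(), mode="same")
print(richardson_lucy(blurred, psf).shape)  # (64, 64)
```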
{"title":"LUCYD: A Feature-Driven Richardson-Lucy Deconvolution Network","authors":"Tomás Chobola, Gesine Müller, V. Dausmann, Anton Theileis, J. Taucher, J. Huisken, Tingying Peng","doi":"10.48550/arXiv.2307.07998","DOIUrl":"https://doi.org/10.48550/arXiv.2307.07998","url":null,"abstract":"The process of acquiring microscopic images in life sciences often results in image degradation and corruption, characterised by the presence of noise and blur, which poses significant challenges in accurately analysing and interpreting the obtained data. This paper proposes LUCYD, a novel method for the restoration of volumetric microscopy images that combines the Richardson-Lucy deconvolution formula and the fusion of deep features obtained by a fully convolutional network. By integrating the image formation process into a feature-driven restoration model, the proposed approach aims to enhance the quality of the restored images whilst reducing computational costs and maintaining a high degree of interpretability. Our results demonstrate that LUCYD outperforms the state-of-the-art methods in both synthetic and real microscopy images, achieving superior performance in terms of image quality and generalisability. We show that the model can handle various microscopy modalities and different imaging conditions by evaluating it on two different microscopy datasets, including volumetric widefield and light-sheet microscopy. Our experiments indicate that LUCYD can significantly improve resolution, contrast, and overall quality of microscopy images. Therefore, it can be a valuable tool for microscopy image restoration and can facilitate further research in various microscopy applications. We made the source code for the model accessible under https://github.com/ctom2/lucyd-deconvolution.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"55 27 1","pages":"656-665"},"PeriodicalIF":0.0,"publicationDate":"2023-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88488510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Adaptive Region Selection for Active Learning in Whole Slide Image Semantic Segmentation
Jingna Qiu, Frauke Wilm, Mathias Öttl, M. Schlereth, Chang Liu, T. Heimann, M. Aubreville, K. Breininger
The process of annotating histological gigapixel-sized whole slide images (WSIs) at the pixel level for the purpose of training a supervised segmentation model is time-consuming. Region-based active learning (AL) involves training the model on a limited number of annotated image regions instead of requesting annotations of the entire images. These annotation regions are iteratively selected, with the goal of optimizing model performance while minimizing the annotated area. The standard method for region selection evaluates the informativeness of all square regions of a specified size and then selects a specific quantity of the most informative regions. We find that the efficiency of this method highly depends on the choice of AL step size (i.e., the combination of region size and the number of selected regions per WSI), and a suboptimal AL step size can result in redundant annotation requests or inflated computation costs. This paper introduces a novel technique for selecting annotation regions adaptively, mitigating the reliance on this AL hyperparameter. Specifically, we dynamically determine each region by first identifying an informative area and then detecting its optimal bounding box, as opposed to selecting regions of a uniform predefined shape and size as in the standard method. We evaluate our method using the task of breast cancer metastases segmentation on the public CAMELYON16 dataset and show that it consistently achieves higher sampling efficiency than the standard method across various AL step sizes. With only 2.6% of tissue area annotated, we achieve full annotation performance and thereby substantially reduce the costs of annotating a WSI dataset. The source code is available at https://github.com/DeepMicroscopy/AdaptiveRegionSelection.
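For reference, the standard fixed-size selection that the paper compares against can be sketched as follows: score every square window of a pixel-wise uncertainty map and greedily keep the top-k non-overlapping ones. The adaptive bounding-box detection proposed in the paper is not shown; the window size, suppression rule, and names are illustrative assumptions.

```python
# Sketch of standard top-k fixed-size region selection from an uncertainty map.
import numpy as np
from scipy.ndimage import uniform_filter

def select_regions(uncertainty: np.ndarray, region: int = 64, k: int = 4):
    # Mean uncertainty of the region-sized window centred at each pixel.
    scores = uniform_filter(uncertainty, size=region, mode="constant")
    half = region // 2
    picks = []
    for _ in range(k):
        r, c = np.unravel_index(np.argmax(scores), scores.shape)
        picks.append((max(0, r - half), max(0, c - half), region, region))  # (top, left, h, w)
        # Suppress the chosen neighbourhood so later picks do not overlap it.
        scores[max(0, r - region):r + region, max(0, c - region):c + region] = -np.inf
    return picks

uncertainty_map = np.random.rand(512, 512)   # stand-in for per-pixel model uncertainty
print(select_regions(uncertainty_map))
```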
{"title":"Adaptive Region Selection for Active Learning in Whole Slide Image Semantic Segmentation","authors":"Jingna Qiu, Frauke Wilm, Mathias Öttl, M. Schlereth, Chang Liu, T. Heimann, M. Aubreville, K. Breininger","doi":"10.48550/arXiv.2307.07168","DOIUrl":"https://doi.org/10.48550/arXiv.2307.07168","url":null,"abstract":"The process of annotating histological gigapixel-sized whole slide images (WSIs) at the pixel level for the purpose of training a supervised segmentation model is time-consuming. Region-based active learning (AL) involves training the model on a limited number of annotated image regions instead of requesting annotations of the entire images. These annotation regions are iteratively selected, with the goal of optimizing model performance while minimizing the annotated area. The standard method for region selection evaluates the informativeness of all square regions of a specified size and then selects a specific quantity of the most informative regions. We find that the efficiency of this method highly depends on the choice of AL step size (i.e., the combination of region size and the number of selected regions per WSI), and a suboptimal AL step size can result in redundant annotation requests or inflated computation costs. This paper introduces a novel technique for selecting annotation regions adaptively, mitigating the reliance on this AL hyperparameter. Specifically, we dynamically determine each region by first identifying an informative area and then detecting its optimal bounding box, as opposed to selecting regions of a uniform predefined shape and size as in the standard method. We evaluate our method using the task of breast cancer metastases segmentation on the public CAMELYON16 dataset and show that it consistently achieves higher sampling efficiency than the standard method across various AL step sizes. With only 2.6% of tissue area annotated, we achieve full annotation performance and thereby substantially reduce the costs of annotating a WSI dataset. The source code is available at https://github.com/DeepMicroscopy/AdaptiveRegionSelection.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"74 1","pages":"90-100"},"PeriodicalIF":0.0,"publicationDate":"2023-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72933758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
cOOpD: Reformulating COPD classification on chest CT scans as anomaly detection using contrastive representations
Sílvia D. Almeida, Carsten T. Lüth, T. Norajitra, T. Wald, M. Nolden, P. Jaeger, C. Heussel, J. Biederer, O. Weinheimer, K. Maier-Hein
Classification of heterogeneous diseases is challenging due to their complexity and the variability of symptoms and imaging findings. Chronic Obstructive Pulmonary Disease (COPD) is a prime example: it remains underdiagnosed despite being the third leading cause of death. Its sparse, diffuse and heterogeneous appearance on computed tomography challenges supervised binary classification. We reformulate COPD binary classification as an anomaly detection task, proposing cOOpD: heterogeneous pathological regions are detected as Out-of-Distribution (OOD) from normal homogeneous lung regions. To this end, we learn representations of unlabeled lung regions employing a self-supervised contrastive pretext model, potentially capturing specific characteristics of diseased and healthy unlabeled regions. A generative model then learns the distribution of healthy representations and identifies abnormalities (stemming from COPD) as deviations. Patient-level scores are obtained by aggregating region OOD scores. We show that cOOpD achieves the best performance on two public datasets, with an increase of 8.2% and 7.7% in terms of AUROC compared to the previous supervised state-of-the-art. Additionally, cOOpD yields well-interpretable spatial anomaly maps and patient-level scores, which we show to be of additional value in identifying individuals in the early stage of progression. Experiments in artificially designed real-world prevalence settings further support that anomaly detection is a powerful way of tackling COPD classification.
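The patient-level scoring described above can be illustrated with a toy sketch: region embeddings (here random placeholders standing in for the contrastive encoder's output) are scored against a simple generative model fitted on healthy regions only, and the per-region OOD scores are aggregated. The Gaussian mixture and the top-k aggregation are simplifying assumptions, not the paper's exact generative model.

```python
# Toy sketch: fit a generative model on healthy region embeddings, score a patient.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
healthy_regions = rng.normal(0.0, 1.0, size=(500, 32))   # training set: healthy regions only
patient_regions = rng.normal(0.5, 1.2, size=(40, 32))    # one test patient's regions

gm = GaussianMixture(n_components=4, random_state=0).fit(healthy_regions)
region_ood = -gm.score_samples(patient_regions)            # negative log-likelihood per region
patient_score = float(np.mean(np.sort(region_ood)[-10:]))  # aggregate most anomalous regions
print(patient_score)
```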
{"title":"cOOpD: Reformulating COPD classification on chest CT scans as anomaly detection using contrastive representations","authors":"Sílvia D. Almeida, Carsten T. Lüth, T. Norajitra, T. Wald, M. Nolden, P. Jaeger, C. Heussel, J. Biederer, O. Weinheimer, K. Maier-Hein","doi":"10.48550/arXiv.2307.07254","DOIUrl":"https://doi.org/10.48550/arXiv.2307.07254","url":null,"abstract":"Classification of heterogeneous diseases is challenging due to their complexity, variability of symptoms and imaging findings. Chronic Obstructive Pulmonary Disease (COPD) is a prime example, being underdiagnosed despite being the third leading cause of death. Its sparse, diffuse and heterogeneous appearance on computed tomography challenges supervised binary classification. We reformulate COPD binary classification as an anomaly detection task, proposing cOOpD: heterogeneous pathological regions are detected as Out-of-Distribution (OOD) from normal homogeneous lung regions. To this end, we learn representations of unlabeled lung regions employing a self-supervised contrastive pretext model, potentially capturing specific characteristics of diseased and healthy unlabeled regions. A generative model then learns the distribution of healthy representations and identifies abnormalities (stemming from COPD) as deviations. Patient-level scores are obtained by aggregating region OOD scores. We show that cOOpD achieves the best performance on two public datasets, with an increase of 8.2% and 7.7% in terms of AUROC compared to the previous supervised state-of-the-art. Additionally, cOOpD yields well-interpretable spatial anomaly maps and patient-level scores which we show to be of additional value in identifying individuals in the early stage of progression. Experiments in artificially designed real-world prevalence settings further support that anomaly detection is a powerful way of tackling COPD classification.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"20 1","pages":"33-43"},"PeriodicalIF":0.0,"publicationDate":"2023-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81699235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ConTrack: Contextual Transformer for Device Tracking in X-ray
Marc Demoustier, Yue Zhang, V. N. Murthy, Florin C. Ghesu, D. Comaniciu
Device tracking is an important prerequisite for guidance during endovascular procedures. Especially during cardiac interventions, detection and tracking of the guiding catheter tip in 2D fluoroscopic images is important for applications such as mapping vessels from angiography (high dose with contrast) to fluoroscopy (low dose without contrast). Tracking the catheter tip poses different challenges: the tip can be occluded by contrast agent during angiography or by interventional devices, and it is in continuous motion due to cardiac and respiratory movements. To overcome these challenges, we propose ConTrack, a transformer-based network that uses both spatial and temporal contextual information for accurate device detection and tracking in both X-ray fluoroscopy and angiography. The spatial information comes from the template frames and the segmentation module: the template frames define the surroundings of the device, whereas the segmentation module detects the entire device to bring more context for the tip prediction. Using multiple templates makes the model more robust to the change in appearance of the device when it is occluded by the contrast agent. The flow information computed on the segmented catheter mask between the current and the previous frame helps further refine the prediction by compensating for the respiratory and cardiac motions. The experiments show that our method achieves 45% or higher accuracy in detection and tracking when compared to state-of-the-art tracking models.
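A minimal sketch of the template idea, with search-frame features attending to template-frame features before a tip heatmap is predicted, is given below. The patch embedding, layer sizes, and heatmap head are illustrative assumptions and much simpler than the ConTrack architecture.

```python
# Sketch: a search frame attends to a template frame, then a per-patch tip heatmap is predicted.
import torch
import torch.nn as nn

class TemplateCrossAttention(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.embed = nn.Conv2d(1, dim, 4, stride=4)              # patchify an X-ray frame
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, 1)                            # per-patch tip logit

    def forward(self, search: torch.Tensor, template: torch.Tensor) -> torch.Tensor:
        q = self.embed(search).flatten(2).transpose(1, 2)        # (B, N, dim) search tokens
        kv = self.embed(template).flatten(2).transpose(1, 2)     # (B, M, dim) template tokens
        fused, _ = self.attn(q, kv, kv)                          # search attends to template
        logits = self.head(fused).squeeze(-1)                    # (B, N)
        side = search.shape[-1] // 4
        return logits.view(-1, side, side)                       # coarse tip heatmap

model = TemplateCrossAttention()
heatmap = model(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128))
print(heatmap.shape)  # torch.Size([2, 32, 32])
```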
{"title":"ConTrack: Contextual Transformer for Device Tracking in X-ray","authors":"Marc Demoustier, Yue Zhang, V. N. Murthy, Florin C. Ghesu, D. Comaniciu","doi":"10.48550/arXiv.2307.07541","DOIUrl":"https://doi.org/10.48550/arXiv.2307.07541","url":null,"abstract":"Device tracking is an important prerequisite for guidance during endovascular procedures. Especially during cardiac interventions, detection and tracking of guiding the catheter tip in 2D fluoroscopic images is important for applications such as mapping vessels from angiography (high dose with contrast) to fluoroscopy (low dose without contrast). Tracking the catheter tip poses different challenges: the tip can be occluded by contrast during angiography or interventional devices; and it is always in continuous movement due to the cardiac and respiratory motions. To overcome these challenges, we propose ConTrack, a transformer-based network that uses both spatial and temporal contextual information for accurate device detection and tracking in both X-ray fluoroscopy and angiography. The spatial information comes from the template frames and the segmentation module: the template frames define the surroundings of the device, whereas the segmentation module detects the entire device to bring more context for the tip prediction. Using multiple templates makes the model more robust to the change in appearance of the device when it is occluded by the contrast agent. The flow information computed on the segmented catheter mask between the current and the previous frame helps in further refining the prediction by compensating for the respiratory and cardiac motions. The experiments show that our method achieves 45% or higher accuracy in detection and tracking when compared to state-of-the-art tracking models.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"44 1","pages":"679-688"},"PeriodicalIF":0.0,"publicationDate":"2023-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76704601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Source-Free Domain Adaptive Fundus Image Segmentation with Class-Balanced Mean Teacher
Longxiang Tang, Kai Li, Chunming He, Yulun Zhang, Xiu Li
This paper studies source-free domain adaptive fundus image segmentation, which aims to adapt a pretrained fundus segmentation model to a target domain using unlabeled images. This is a challenging task because it is highly risky to adapt a model using only unlabeled data. Most existing methods tackle this task mainly by designing techniques to carefully generate pseudo labels from the model's predictions and use the pseudo labels to train the model. While often obtaining positive adaptation effects, these methods suffer from two major issues. First, they tend to be fairly unstable: incorrect pseudo labels that emerge abruptly may have a catastrophic impact on the model. Second, they fail to consider the severe class imbalance of fundus images, where the foreground (e.g., cup) region is usually very small. This paper aims to address these two issues by proposing the Class-Balanced Mean Teacher (CBMT) model. CBMT addresses the instability issue with a weak-strong augmented mean teacher learning scheme, where only the teacher model generates pseudo labels from weakly augmented images to train a student model that takes strongly augmented images as input. The teacher is updated as the moving average of the instantly trained student, which could be noisy. This prevents the teacher model from being abruptly impacted by incorrect pseudo labels. For the class imbalance issue, CBMT proposes a novel loss calibration approach to highlight foreground classes according to global statistics. Experiments show that CBMT addresses these two issues well and outperforms existing methods on multiple benchmarks.
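The weak-strong mean-teacher loop at the core of CBMT can be sketched in a few lines: the EMA teacher produces pseudo labels from weakly augmented inputs, and the student is trained on strongly augmented inputs with a class-weighted loss. The toy network, augmentations, and fixed class weights below are illustrative assumptions; the paper derives its weights from global class statistics.

```python
# Sketch of one weak-strong mean-teacher step with a class-weighted pseudo-label loss.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def ema_update(teacher: nn.Module, student: nn.Module, momentum: float = 0.99) -> None:
    # Teacher weights are an exponential moving average of the student's weights.
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(momentum).add_(s, alpha=1.0 - momentum)

student = nn.Conv2d(3, 2, 1)                   # toy 2-class segmentation "network"
teacher = copy.deepcopy(student)
opt = torch.optim.SGD(student.parameters(), lr=0.01)

unlabeled = torch.rand(4, 3, 64, 64)
weak, strong = unlabeled, unlabeled + 0.1 * torch.randn_like(unlabeled)  # stand-in augmentations

with torch.no_grad():
    pseudo = teacher(weak).argmax(dim=1)        # pseudo labels from weakly augmented input
class_weights = torch.tensor([0.2, 0.8])        # illustrative: up-weight the small foreground class
loss = F.cross_entropy(student(strong), pseudo, weight=class_weights)
loss.backward(); opt.step(); opt.zero_grad()
ema_update(teacher, student)
print(float(loss))
```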
{"title":"Source-Free Domain Adaptive Fundus Image Segmentation with Class-Balanced Mean Teacher","authors":"Longxiang Tang, Kai Li, Chunming He, Yulun Zhang, Xiu Li","doi":"10.48550/arXiv.2307.09973","DOIUrl":"https://doi.org/10.48550/arXiv.2307.09973","url":null,"abstract":"This paper studies source-free domain adaptive fundus image segmentation which aims to adapt a pretrained fundus segmentation model to a target domain using unlabeled images. This is a challenging task because it is highly risky to adapt a model only using unlabeled data. Most existing methods tackle this task mainly by designing techniques to carefully generate pseudo labels from the model's predictions and use the pseudo labels to train the model. While often obtaining positive adaption effects, these methods suffer from two major issues. First, they tend to be fairly unstable - incorrect pseudo labels abruptly emerged may cause a catastrophic impact on the model. Second, they fail to consider the severe class imbalance of fundus images where the foreground (e.g., cup) region is usually very small. This paper aims to address these two issues by proposing the Class-Balanced Mean Teacher (CBMT) model. CBMT addresses the unstable issue by proposing a weak-strong augmented mean teacher learning scheme where only the teacher model generates pseudo labels from weakly augmented images to train a student model that takes strongly augmented images as input. The teacher is updated as the moving average of the instantly trained student, which could be noisy. This prevents the teacher model from being abruptly impacted by incorrect pseudo-labels. For the class imbalance issue, CBMT proposes a novel loss calibration approach to highlight foreground classes according to global statistics. Experiments show that CBMT well addresses these two issues and outperforms existing methods on multiple benchmarks.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"191 1","pages":"684-694"},"PeriodicalIF":0.0,"publicationDate":"2023-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74780102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Transformer-based end-to-end classification of variable-length volumetric data
Marzieh Oghbaie, Teresa Araújo, T. Emre, U. Schmidt-Erfurth, H. Bogunović
The automatic classification of 3D medical data is memory-intensive. In addition, variations in the number of slices between samples are common. Naïve solutions such as subsampling can solve these problems, but at the cost of potentially eliminating relevant diagnostic information. Transformers have shown promising performance for sequential data analysis. However, their application to long sequences is demanding in terms of data, computation, and memory. In this paper, we propose an end-to-end Transformer-based framework that classifies volumetric data of variable length in an efficient fashion. In particular, by randomizing the input volume-wise resolution (#slices) during training, we enhance the capacity of the learnable positional embedding assigned to each volume slice. Consequently, the accumulated positional information in each positional embedding can be generalized to the neighbouring slices, even for high-resolution volumes at test time. By doing so, the model becomes more robust to variable volume length and amenable to different computational budgets. We evaluated the proposed approach on retinal OCT volume classification and achieved a 21.96% average improvement in balanced accuracy on a 9-class diagnostic task, compared to state-of-the-art video transformers. Our findings show that varying the volume-wise resolution of the input during training results in more informative volume representations compared to training with a fixed number of slices per volume.
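One way to read the variable-length design is sketched below: slice-level embeddings of an arbitrary count are prepended with a class token, a learnable positional embedding is simply cut to the sampled length, and a transformer encoder plus linear head produces the diagnosis logits. The dimensions and the absence of a slice encoder are illustrative assumptions.

```python
# Sketch: classify a variable number of slice embeddings with a length-truncated positional embedding.
import torch
import torch.nn as nn

class VolumeClassifier(nn.Module):
    def __init__(self, dim: int = 128, max_slices: int = 256, n_classes: int = 9):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, max_slices + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, slice_feats: torch.Tensor) -> torch.Tensor:
        b, n, _ = slice_feats.shape
        tokens = torch.cat([self.cls.expand(b, -1, -1), slice_feats], dim=1)
        tokens = tokens + self.pos[:, : n + 1]        # positional embedding cut to volume length
        return self.head(self.encoder(tokens)[:, 0])  # classify from the class token

model = VolumeClassifier()
for n_slices in (32, 97, 200):                         # the slice count varies across volumes
    print(model(torch.randn(2, n_slices, 128)).shape)  # torch.Size([2, 9])
```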
{"title":"Transformer-based end-to-end classification of variable-length volumetric data","authors":"Marzieh Oghbaie, Teresa Araújo, T. Emre, U. Schmidt-Erfurth, H. Bogunović","doi":"10.48550/arXiv.2307.06666","DOIUrl":"https://doi.org/10.48550/arXiv.2307.06666","url":null,"abstract":"The automatic classification of 3D medical data is memory-intensive. Also, variations in the number of slices between samples is common. Na\"ive solutions such as subsampling can solve these problems, but at the cost of potentially eliminating relevant diagnosis information. Transformers have shown promising performance for sequential data analysis. However, their application for long sequences is data, computationally, and memory demanding. In this paper, we propose an end-to-end Transformer-based framework that allows to classify volumetric data of variable length in an efficient fashion. Particularly, by randomizing the input volume-wise resolution(#slices) during training, we enhance the capacity of the learnable positional embedding assigned to each volume slice. Consequently, the accumulated positional information in each positional embedding can be generalized to the neighbouring slices, even for high-resolution volumes at the test time. By doing so, the model will be more robust to variable volume length and amenable to different computational budgets. We evaluated the proposed approach in retinal OCT volume classification and achieved 21.96% average improvement in balanced accuracy on a 9-class diagnostic task, compared to state-of-the-art video transformers. Our findings show that varying the volume-wise resolution of the input during training results in more informative volume representation as compared to training with fixed number of slices per volume.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"75 1","pages":"358-367"},"PeriodicalIF":0.0,"publicationDate":"2023-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72819394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CellGAN: Conditional Cervical Cell Synthesis for Augmenting Cytopathological Image Classification
Zhenrong Shen, Mao-Hong Cao, Sheng Wang, Lichi Zhang, Qian Wang
Automatic examination of thin-prep cytologic test (TCT) slides can assist pathologists in finding cervical abnormality for accurate and efficient cancer screening. Current solutions mostly need to localize suspicious cells and classify abnormality based on local patches, given that whole slide images of TCT are extremely large. This requires many annotations of normal and abnormal cervical cells to supervise the training of the patch-level classifier for promising performance. In this paper, we propose CellGAN to synthesize cytopathological images of various cervical cell types for augmenting patch-level cell classification. Built upon a lightweight backbone, CellGAN is equipped with a non-linear class mapping network to effectively incorporate cell type information into image generation. We also propose the Skip-layer Global Context module to model the complex spatial relationship of the cells, and attain high fidelity of the synthesized images through adversarial learning. Our experiments demonstrate that CellGAN can produce visually plausible TCT cytopathological images for different cell types. We also validate the effectiveness of using CellGAN to greatly augment patch-level cell classification performance.
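The class-conditioning mechanism can be illustrated with a small sketch: the cell-type label passes through a non-linear mapping network and is injected alongside the noise vector. The generator body, layer sizes, and output resolution below are illustrative assumptions and far simpler than CellGAN itself.

```python
# Sketch: condition a toy generator on cell type via a small non-linear mapping network.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, n_classes: int = 5, z_dim: int = 64):
        super().__init__()
        self.n_classes = n_classes
        # Non-linear class mapping: one-hot label -> embedding injected with the noise.
        self.mapping = nn.Sequential(nn.Linear(n_classes, 64), nn.ReLU(), nn.Linear(64, 64))
        self.net = nn.Sequential(
            nn.Linear(z_dim + 64, 256), nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32), nn.Tanh(),
        )

    def forward(self, z: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        c = self.mapping(nn.functional.one_hot(labels, self.n_classes).float())
        img = self.net(torch.cat([z, c], dim=1))
        return img.view(-1, 3, 32, 32)

g = ConditionalGenerator()
fake = g(torch.randn(8, 64), torch.randint(0, 5, (8,)))
print(fake.shape)  # torch.Size([8, 3, 32, 32])
```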
{"title":"CellGAN: Conditional Cervical Cell Synthesis for Augmenting Cytopathological Image Classification","authors":"Zhenrong Shen, Mao-Hong Cao, Sheng Wang, Lichi Zhang, Qian Wang","doi":"10.48550/arXiv.2307.06182","DOIUrl":"https://doi.org/10.48550/arXiv.2307.06182","url":null,"abstract":"Automatic examination of thin-prep cytologic test (TCT) slides can assist pathologists in finding cervical abnormality for accurate and efficient cancer screening. Current solutions mostly need to localize suspicious cells and classify abnormality based on local patches, concerning the fact that whole slide images of TCT are extremely large. It thus requires many annotations of normal and abnormal cervical cells, to supervise the training of the patch-level classifier for promising performance. In this paper, we propose CellGAN to synthesize cytopathological images of various cervical cell types for augmenting patch-level cell classification. Built upon a lightweight backbone, CellGAN is equipped with a non-linear class mapping network to effectively incorporate cell type information into image generation. We also propose the Skip-layer Global Context module to model the complex spatial relationship of the cells, and attain high fidelity of the synthesized images through adversarial learning. Our experiments demonstrate that CellGAN can produce visually plausible TCT cytopathological images for different cell types. We also validate the effectiveness of using CellGAN to greatly augment patch-level cell classification performance.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"86 1","pages":"487-496"},"PeriodicalIF":0.0,"publicationDate":"2023-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76346384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Rectifying Noisy Labels with Sequential Prior: Multi-Scale Temporal Feature Affinity Learning for Robust Video Segmentation
Beilei Cui, Minqing Zhang, Mengya Xu, An-Chi Wang, Wu Yuan, Hongliang Ren
Noisy label problems inevitably exist in medical image segmentation, causing severe performance degradation. Previous segmentation methods for noisy label problems only utilize a single image, while the potential of leveraging the correlation between images has been overlooked. Especially for video segmentation, adjacent frames contain rich contextual information that is beneficial for recognizing noisy labels. Based on these two insights, we propose a Multi-Scale Temporal Feature Affinity Learning (MS-TFAL) framework to resolve noisy-labeled medical video segmentation issues. First, we argue that the sequential prior of videos is an effective reference, i.e., pixel-level features from adjacent frames are close in distance for the same class and far in distance otherwise. Therefore, Temporal Feature Affinity Learning (TFAL) is devised to indicate possible noisy labels by evaluating the affinity between pixels in two adjacent frames. We also notice that the noise distribution exhibits considerable variations across video, image, and pixel levels. Accordingly, we introduce Multi-Scale Supervision (MSS) to supervise the network from three different perspectives by re-weighting and refining the samples. This design enables the network to concentrate on clean samples in a coarse-to-fine manner. Experiments with both synthetic and real-world label noise demonstrate that our method outperforms recent state-of-the-art robust segmentation approaches. Code is available at https://github.com/BeileiCui/MS-TFAL.
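The affinity test described above can be sketched as follows: per-pixel features of two adjacent frames are compared with cosine similarity, and pixels whose label is unchanged but whose features disagree are flagged as possibly noisy. The threshold, shapes, and single-pair comparison are illustrative assumptions; the paper operates at video, image, and pixel scales.

```python
# Toy sketch of temporal feature affinity between two adjacent frames.
import torch
import torch.nn.functional as F

def suspicious_label_mask(feat_t: torch.Tensor, feat_prev: torch.Tensor,
                          label_t: torch.Tensor, label_prev: torch.Tensor,
                          thresh: float = 0.5) -> torch.Tensor:
    # feat_*: (C, H, W) per-pixel features; label_*: (H, W) class indices.
    affinity = F.cosine_similarity(feat_t, feat_prev, dim=0)   # (H, W) per-pixel affinity
    same_class = label_t == label_prev
    # Same label but dissimilar features across adjacent frames -> possibly noisy label.
    return same_class & (affinity < thresh)

feat_t, feat_prev = torch.randn(16, 64, 64), torch.randn(16, 64, 64)
label_t, label_prev = torch.randint(0, 2, (64, 64)), torch.randint(0, 2, (64, 64))
print(suspicious_label_mask(feat_t, feat_prev, label_t, label_prev).float().mean())
```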
{"title":"Rectifying Noisy Labels with Sequential Prior: Multi-Scale Temporal Feature Affinity Learning for Robust Video Segmentation","authors":"Beilei Cui, Minqing Zhang, Mengya Xu, An-Chi Wang, Wu Yuan, Hongliang Ren","doi":"10.48550/arXiv.2307.05898","DOIUrl":"https://doi.org/10.48550/arXiv.2307.05898","url":null,"abstract":"Noisy label problems are inevitably in existence within medical image segmentation causing severe performance degradation. Previous segmentation methods for noisy label problems only utilize a single image while the potential of leveraging the correlation between images has been overlooked. Especially for video segmentation, adjacent frames contain rich contextual information beneficial in cognizing noisy labels. Based on two insights, we propose a Multi-Scale Temporal Feature Affinity Learning (MS-TFAL) framework to resolve noisy-labeled medical video segmentation issues. First, we argue the sequential prior of videos is an effective reference, i.e., pixel-level features from adjacent frames are close in distance for the same class and far in distance otherwise. Therefore, Temporal Feature Affinity Learning (TFAL) is devised to indicate possible noisy labels by evaluating the affinity between pixels in two adjacent frames. We also notice that the noise distribution exhibits considerable variations across video, image, and pixel levels. In this way, we introduce Multi-Scale Supervision (MSS) to supervise the network from three different perspectives by re-weighting and refining the samples. This design enables the network to concentrate on clean samples in a coarse-to-fine manner. Experiments with both synthetic and real-world label noise demonstrate that our method outperforms recent state-of-the-art robust segmentation approaches. Code is available at https://github.com/BeileiCui/MS-TFAL.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"53 93 1","pages":"90-100"},"PeriodicalIF":0.0,"publicationDate":"2023-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78353294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Correlation-Aware Mutual Learning for Semi-supervised Medical Image Segmentation
Shengbo Gao, Zijia Zhang, Jiechao Ma, Zilong Li, Shu Zhang
Semi-supervised learning has become increasingly popular in medical image segmentation due to its ability to leverage large amounts of unlabeled data to extract additional information. However, most existing semi-supervised segmentation methods only focus on extracting information from unlabeled data, disregarding the potential of labeled data to further improve the performance of the model. In this paper, we propose a novel Correlation Aware Mutual Learning (CAML) framework that leverages labeled data to guide the extraction of information from unlabeled data. Our approach is based on a mutual learning strategy that incorporates two modules: the Cross-sample Mutual Attention Module (CMA) and the Omni-Correlation Consistency Module (OCC). The CMA module establishes dense cross-sample correlations among a group of samples, enabling the transfer of label prior knowledge to unlabeled data. The OCC module constructs omni-correlations between the unlabeled and labeled datasets and regularizes dual models by constraining the omni-correlation matrix of each sub-model to be consistent. Experiments on the Atrial Segmentation Challenge dataset demonstrate that our proposed approach outperforms state-of-the-art methods, highlighting the effectiveness of our framework in medical image segmentation tasks. The codes, pre-trained weights, and data are publicly available.
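A consistency term of the kind described for the OCC module can be sketched as follows: embeddings of the same unlabeled pixels from the two sub-models are each correlated against a shared bank of labeled-pixel embeddings, and the two correlation distributions are pushed to agree. The KL form, temperature, and random placeholder features are illustrative assumptions rather than the paper's exact loss.

```python
# Sketch of an omni-correlation consistency loss between two sub-models.
import torch
import torch.nn.functional as F

def omni_correlation_loss(feats_a: torch.Tensor, feats_b: torch.Tensor,
                          labeled_bank: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    # feats_*: (N, D) unlabeled embeddings from the two sub-models; labeled_bank: (M, D).
    bank = F.normalize(labeled_bank, dim=1)
    a = F.normalize(feats_a, dim=1) @ bank.T / tau    # (N, M) correlation with labeled features
    b = F.normalize(feats_b, dim=1) @ bank.T / tau
    # Treat each row as a distribution over labeled anchors and align the two models.
    return F.kl_div(F.log_softmax(a, dim=1), F.softmax(b, dim=1), reduction="batchmean")

loss = omni_correlation_loss(torch.randn(32, 64), torch.randn(32, 64), torch.randn(128, 64))
print(float(loss))
```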
{"title":"Correlation-Aware Mutual Learning for Semi-supervised Medical Image Segmentation","authors":"Shengbo Gao, Zijia Zhang, Jiechao Ma, Zilong Li, Shu Zhang","doi":"10.48550/arXiv.2307.06312","DOIUrl":"https://doi.org/10.48550/arXiv.2307.06312","url":null,"abstract":"Semi-supervised learning has become increasingly popular in medical image segmentation due to its ability to leverage large amounts of unlabeled data to extract additional information. However, most existing semi-supervised segmentation methods only focus on extracting information from unlabeled data, disregarding the potential of labeled data to further improve the performance of the model. In this paper, we propose a novel Correlation Aware Mutual Learning (CAML) framework that leverages labeled data to guide the extraction of information from unlabeled data. Our approach is based on a mutual learning strategy that incorporates two modules: the Cross-sample Mutual Attention Module (CMA) and the Omni-Correlation Consistency Module (OCC). The CMA module establishes dense cross-sample correlations among a group of samples, enabling the transfer of label prior knowledge to unlabeled data. The OCC module constructs omni-correlations between the unlabeled and labeled datasets and regularizes dual models by constraining the omni-correlation matrix of each sub-model to be consistent. Experiments on the Atrial Segmentation Challenge dataset demonstrate that our proposed approach outperforms state-of-the-art methods, highlighting the effectiveness of our framework in medical image segmentation tasks. The codes, pre-trained weights, and data are publicly available.","PeriodicalId":18289,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"295 1","pages":"98-108"},"PeriodicalIF":0.0,"publicationDate":"2023-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75421168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5