
Latest articles in Pattern Recognition Letters

GAF-Net: A new automated segmentation method based on multiscale feature fusion and feedback module
IF 3.9 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-26 | DOI: 10.1016/j.patrec.2024.11.025
Long Wen , Yuxing Ye , Lei Zuo
Surface defect detection (SDD) is an essential technique for monitoring the surface quality of production. However, fine-grained defects caused by stress loading, environmental influences, and construction flaws are still challenging to detect. In this research, a convolutional neural network for crack segmentation is developed based on feature fusion and feedback over global and multi-scale features (GAF-Net). First, a multi-scale feature feedback module (MSFF) is proposed, which uses four different scales to refine local features by fusing high-level and sub-high-level features to perform feedback correction. Second, a global feature module (GF) is proposed to generate a fine global information map using local features and adaptive weighted fusion with the correction map for crack detection. Finally, the GAF-Net network with multi-level feature maps is deeply supervised to accelerate training and improve detection accuracy. GAF-Net is trained and evaluated on three publicly available pavement crack datasets, and the results show that it achieves state-of-the-art IoU segmentation scores compared to other deep learning methods (CrackForest: 53.61%; Crack500: 65.19%; DeepCrack: 81.63%).
Pattern Recognition Letters, Volume 187, Pages 86-92.
Citations: 0
Bilateral symmetry-based augmentation method for improved tooth segmentation in panoramic X-rays
IF 3.9 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-26 | DOI: 10.1016/j.patrec.2024.11.023
Sanket Wathore, Subrahmanyam Gorthi
Panoramic X-rays are crucial in dental radiology, providing detailed images that are essential for diagnosing and planning treatment for various oral conditions. The advent of automated methods that learn from annotated data promises to significantly aid clinical experts in making accurate diagnoses. However, these methods often require large amounts of annotated data, making the generation of high-quality annotations for panoramic X-rays both challenging and time-consuming. This paper introduces a novel bilateral symmetry-based augmentation method specifically designed to enhance tooth segmentation in panoramic X-rays. By exploiting the inherent bilateral symmetry of these images, our proposed method systematically generates augmented data, leading to substantial improvements in the performance of tooth segmentation models. By increasing the training data size fourfold, our approach proportionately reduces the effort required to manually annotate extensive datasets. These findings highlight the potential of leveraging the symmetrical properties of medical images to enhance model performance and accuracy in dental radiology. The effectiveness of the proposed method is evaluated on three widely adopted deep learning models: U-Net, SE U-Net, and TransUNet. Significant improvements in segmentation accuracy are observed with the proposed augmentation method across all models. For example, the average Dice Similarity Coefficient (DSC) increases by over 8%, reaching 76.7% for TransUNet. Further, comparisons with existing augmentation methods, including rigid transform-based and elastic grid-based techniques, show that the proposed method consistently outperforms them with additional improvements of up to 5% in average DSC, the exact gain varying with the model and training dataset size. We have made the data augmentation code and tools developed for our method available at https://github.com/wathoresanket/bilateralsymmetrybasedaugmentation.
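The core idea of symmetry-based augmentation can be sketched as a left-right mirroring of the X-ray and its mask; this is a simplified illustration, not the released pipeline, which is more elaborate (tooth identity labels, for instance, would need left/right renumbering):

```python
import numpy as np

def mirror_augment(image: np.ndarray, mask: np.ndarray):
    """Generate a mirrored (left-right flipped) copy of an X-ray and its
    tooth mask, exploiting the roughly bilateral symmetry of panoramic
    views. Masks are treated as binary here for simplicity."""
    return np.fliplr(image), np.fliplr(mask)

img = np.arange(6).reshape(2, 3)
msk = np.array([[1, 0, 0], [0, 0, 1]])
aug_img, aug_msk = mirror_augment(img, msk)
print(aug_msk.tolist())  # [[0, 0, 1], [1, 0, 0]]
```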
Pattern Recognition Letters, Volume 188, Pages 1-7.
Citations: 0
Segmentation of MRI tumors and pelvic anatomy via cGAN-synthesized data and attention-enhanced U-Net
IF 3.9 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-24 | DOI: 10.1016/j.patrec.2024.11.003
Mudassar Ali , Haoji Hu , Tong Wu , Maryam Mansoor , Qiong Luo , Weizeng Zheng , Neng Jin
Accurate tumor segmentation within MRI images is of great importance for both diagnosis and treatment; however, in many cases, sufficient annotated datasets are not available. This paper develops a novel approach to the medical image segmentation of tumors in the brain, liver, and pelvic regions within MRI images, by combining an attention-enhanced U-Net model with a cGAN. We introduce three key novelties: a patch discriminator in the cGAN to enhance the realism of generated images, attention mechanisms in the U-Net to enhance segmentation accuracy, and an application to pelvic MRI segmentation, which has seen little exploration. Our method addresses the limited availability of annotated data by generating realistic synthetic images to augment training. Our experimental results on brain, liver, and pelvic MRI datasets show that our approach outperforms state-of-the-art methods with a Dice Coefficient of 98.61% for brain MRI, 88.60% for liver MRI, and 91.93% for pelvic MRI. We also observe marked improvements in the Hausdorff Distance, especially at complex anatomical regions such as tumor boundaries. The proposed combination of synthetic data creation and novel segmentation techniques opens new perspectives for robust medical image segmentation.
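The Hausdorff Distance mentioned above measures the worst-case boundary error between a predicted and a ground-truth contour; a minimal NumPy sketch of the symmetric variant (not the paper's evaluation code) on toy boundary point sets:

```python
import numpy as np

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two point sets, e.g. the
    boundary pixels of predicted and ground-truth tumor contours."""
    # Pairwise Euclidean distances between every point in a and every point in b
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Worst-case nearest-neighbour distance, taken in both directions
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

pred_boundary = np.array([[0.0, 0.0], [1.0, 0.0]])
true_boundary = np.array([[0.0, 0.0], [4.0, 0.0]])
print(hausdorff(pred_boundary, true_boundary))  # 3.0: (4,0) is 3 away from its nearest predicted point
```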
Pattern Recognition Letters, Volume 187, Pages 100-106.
Citations: 0
Incremental component tree contour computation
IF 3.9 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-23 | DOI: 10.1016/j.patrec.2024.11.019
Dennis J. Silva , Jiří Kosinka , Ronaldo F. Hashimoto , Jos B.T.M. Roerdink , Alexandre Morimitsu , Wonder A.L. Alves
A component tree is a graph representation that encodes the connected components of the upper or lower level sets of a grayscale image. Consequently, the nodes of a component tree represent binary images of the encoded connected components. There exist various algorithms that efficiently extract information and attributes of nodes of a component tree by incrementally exploiting the subset relation encoded in the tree. However, to the best of our knowledge, there is no such incremental approach to extract the contours of the nodes. In this paper, we propose an efficient incremental method to compute the contours of the nodes of a component tree by counting the edges (sides) of contour pixels. In addition, we discuss our method’s time complexity. We also experimentally show that our proposed method is faster than the standard approach based on node reconstruction.
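The edge-counting idea the abstract describes, i.e. measuring a component's contour by the number of pixel sides exposed to the background, can be sketched non-incrementally for a single binary component (the paper's contribution is doing this incrementally over the whole tree):

```python
import numpy as np

def contour_perimeter(mask: np.ndarray) -> int:
    """Perimeter of a binary component, counted as the number of pixel
    sides (4-neighbourhood edges) exposed to the background."""
    m = np.pad(mask.astype(int), 1)  # pad so border pixels also expose sides
    sides = 0
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        # A side is exposed when a foreground pixel's neighbour in that
        # direction is background.
        sides += np.sum((m == 1) & (np.roll(m, shift, axis=(0, 1)) == 0))
    return int(sides)

square = np.ones((2, 2), dtype=int)
print(contour_perimeter(square))  # 8: each of the 4 pixels exposes 2 sides
```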
Pattern Recognition Letters, Volume 187, Pages 115-121. Open access.
Citations: 0
Multichannel image classification based on adaptive attribute profiles
IF 3.9 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-23 | DOI: 10.1016/j.patrec.2024.11.015
Wonder A.L. Alves , Wander S. Campos , Charles F. Gobber , Dennis J. Silva , Ronaldo F. Hashimoto
Morphological Attribute Profiles serve as powerful tools for extracting meaningful features from remote sensing data. The construction of Morphological Attribute Profiles relies on two primary parameters: the choice of attribute type and the definition of a numerical threshold sequence. However, selecting an appropriate threshold sequence can be a difficult task, as an inappropriate choice can lead to an uninformative feature space. In this paper, we propose a semi-automatic approach based on the theory of Maximally Stable Extremal Regions to address this challenge. Our approach takes an increasing attribute type and an initial sequence of thresholds as input and locally adjusts threshold values based on region stability within the image. Experimental results demonstrate that our method significantly increases classification accuracy through the refinement of threshold values.
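The Maximally Stable Extremal Regions criterion the method builds on selects thresholds where the tracked region's size changes least relative to its area; a toy sketch of that stability test (an illustration of the MSER idea, not the paper's implementation):

```python
def most_stable_threshold(areas: dict, delta: int = 1) -> int:
    """Pick the threshold whose region area changes least relative to its
    size - the MSER stability criterion. `areas` maps each candidate
    threshold to the area of the tracked extremal region at that level."""
    def stability(t: int) -> float:
        return abs(areas[t + delta] - areas[t - delta]) / areas[t]

    candidates = [t for t in areas if t - delta in areas and t + delta in areas]
    return min(candidates, key=stability)

# Toy area evolution: the region barely changes around threshold 3
areas = {1: 100, 2: 60, 3: 55, 4: 52, 5: 20}
print(most_stable_threshold(areas))  # 3
```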
Pattern Recognition Letters, Volume 187, Pages 107-114.
Citations: 0
Generation of super-resolution for medical image via a self-prior guided Mamba network with edge-aware constraint
IF 3.9 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-22 | DOI: 10.1016/j.patrec.2024.11.020
Zexin Ji , Beiji Zou , Xiaoyan Kui , Hua Li , Pierre Vera , Su Ruan
Existing deep learning-based super-resolution generation approaches usually depend on the backbone of convolutional neural networks (CNNs) or Transformers. CNN-based approaches are unable to model long-range dependencies, whereas Transformer-based approaches encounter significant computational burdens due to quadratic complexity in calculations. Moreover, high-frequency texture details in images generated by existing approaches still remain indistinct, posing a major challenge in super-resolution tasks. To overcome these problems, we propose a self-prior guided Mamba network with edge-aware constraint (SEMambaSR) for medical image super-resolution. Recently, State Space Models (SSMs), notably Mamba, have gained prominence for their ability to efficiently model long-range dependencies with low complexity. In this paper, we propose to integrate Mamba into the Unet network, allowing it to extract multi-scale local and global features to generate high-quality super-resolution images. Additionally, we introduce perturbations by randomly adding a brightness window to the input image, enabling the network to mine the self-prior information of the image. We also design an improved 2D-Selective-Scan (ISS2D) module to learn and adaptively fuse multi-directional long-range dependencies in image features to enhance feature representation. An edge-aware constraint is exploited to learn the multi-scale edge information from encoder features for better synthesis of texture boundaries. Our qualitative and quantitative experimental findings indicate superior super-resolution performance over current methods on the IXI and BraTS2021 medical datasets. Specifically, our approach achieved a PSNR of 33.44 dB and an SSIM of 0.9371 on IXI, and a PSNR of 41.99 dB and an SSIM of 0.9846 on BraTS2021, both for 2× upsampling. The downstream vision task on brain tumor segmentation, using a U-Net network, also reveals the effectiveness of our approach, with a mean Dice Score of 57.06% on the BraTS2021 dataset.
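The PSNR figures quoted above follow the standard definition over mean squared error; a minimal NumPy sketch of the metric (not the authors' evaluation code):

```python
import numpy as np

def psnr(x: np.ndarray, y: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a super-resolved image
    and its ground truth, for pixel values in [0, max_val]."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

a = np.full((8, 8), 100.0)
b = a + 5.0  # uniform error of 5 gray levels -> MSE = 25
print(round(psnr(a, b), 2))  # 10 * log10(255^2 / 25) ≈ 34.15
```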
Pattern Recognition Letters, Volume 187, Pages 93-99.
Citations: 0
Prototypical class-wise test-time adaptation
IF 3.9 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-22 | DOI: 10.1016/j.patrec.2024.10.011
Hojoon Lee , Seunghwan Lee , Inyoung Jung , Sungeun Hong
Test-time adaptation (TTA) refines pre-trained models during deployment, enabling them to effectively manage new, previously unseen data. However, existing TTA methods focus mainly on global domain alignment, which reduces domain-level gaps but often leads to suboptimal performance. This is because they fail to explicitly consider class-wise alignment, resulting in errors when reliable pseudo-labels are unavailable and source domain samples are inaccessible. In this study, we propose a prototypical class-wise test-time adaptation method, which consists of class-wise prototype adaptation and reliable pseudo-labeling. A main challenge in this approach is the lack of direct access to source domain samples. We leverage the class-specific knowledge contained in the weights of the pre-trained model. To construct class prototypes from the unlabeled target domain, we further introduce a methodology to enhance the reliability of pseudo labels. Our method is adaptable to various models and has been extensively validated, consistently outperforming baselines across multiple benchmark datasets.
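The class-prototype idea underlying the method can be sketched as averaging pseudo-labeled target features per class and re-assigning samples to their nearest prototype; a hypothetical simplification of the approach, which additionally derives prototypes from classifier weights and filters unreliable pseudo-labels:

```python
import numpy as np

def prototypes(features: np.ndarray, pseudo_labels: np.ndarray,
               n_classes: int) -> np.ndarray:
    """Mean target feature vector per pseudo-labeled class."""
    return np.stack([features[pseudo_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def assign(features: np.ndarray, protos: np.ndarray) -> np.ndarray:
    """Re-label each sample by its nearest class prototype."""
    d = np.linalg.norm(features[:, None, :] - protos[None, :, :], axis=-1)
    return d.argmin(axis=1)

feats = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(feats, labels, 2)
print(assign(feats, protos).tolist())  # [0, 0, 1, 1]
```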
Pattern Recognition Letters, Volume 187, Pages 49-55.
Citations: 0
Detailed evaluation of a population-wise personalization approach to generate synthetic myocardial infarct images
IF 3.9 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-22 | DOI: 10.1016/j.patrec.2024.11.017
Anastasia Konik , Patrick Clarysse , Nicolas Duchateau
Personalization of biophysical models to real data is essential to achieve realistic simulations or generate relevant synthetic populations. However, some of these models involve randomness, which poses two challenges: they do not allow standard personalization to each individual’s data, and they lack the analytical formulation required for optimization. In previous work, we introduced a population-based personalization strategy which overcomes these challenges and demonstrated its feasibility on simple 2D geometrical models of myocardial infarct. The method consists in matching the distributions of the synthetic and real populations, quantified through the Kullback–Leibler (KL) divergence. Personalization is achieved with a gradient-free algorithm (CMA-ES), which generates sets of candidate solutions represented by their covariance matrix, whose coefficients evolve until the synthetic and real data are matched. However, the robustness of this strategy with respect to settings and more complex data had not been challenged. In this work, we specifically address these points, with (i) an improved design, (ii) a thorough evaluation of crucial aspects of the personalization process, including hyperparameters and initialization, and (iii) the application to 3D data. Despite some limits of the simple geometrical models used, our method is able to capture the main characteristics of the real data, as demonstrated on both 2D and 3D segmented late Gadolinium images of 123 subjects with acute myocardial infarction.
将生物物理模型个性化为真实数据对于实现真实的模拟或生成相关的合成种群至关重要。然而,其中一些模型涉及随机性,这带来了两个挑战:它们不允许对每个人的数据进行标准个性化,并且它们缺乏优化所需的分析公式。在之前的工作中,我们介绍了一种基于人群的个性化策略,克服了这些挑战,并在简单的二维心肌梗死几何模型上证明了其可行性。该方法包括匹配合成种群和真实种群的分布,通过Kullback-Leibler (KL)散度量化。个性化是通过无梯度算法(CMA-ES)实现的,该算法生成一组候选解,由它们的协方差矩阵表示,其系数不断进化,直到合成数据和真实数据匹配。然而,对于设置和更复杂的数据,这种策略的稳健性没有受到挑战。在这项工作中,我们通过(i)改进的设计,(ii)对个性化过程的关键方面进行全面评估,包括超参数和初始化,以及(iii)对3D数据的应用,专门解决了这些问题。尽管所使用的简单几何模型存在一些局限性,但我们的方法能够捕获真实数据的主要特征,正如123例急性心肌梗死受试者的2D和3D分割晚期钆图像所证明的那样。
{"title":"Detailed evaluation of a population-wise personalization approach to generate synthetic myocardial infarct images","authors":"Anastasia Konik ,&nbsp;Patrick Clarysse ,&nbsp;Nicolas Duchateau","doi":"10.1016/j.patrec.2024.11.017","DOIUrl":"10.1016/j.patrec.2024.11.017","url":null,"abstract":"<div><div>Personalization of biophysical models to real data is essential to achieve realistic simulations or generate relevant synthetic populations. However, some of these models involve randomness, which poses two challenges: they do not allow the standard personalization to each individual’s data and they lack an analytical formulation required for optimization. In previous work, we introduced a population-based personalization strategy which overcomes these challenges and demonstrated its feasibility on simple 2D geometrical models of myocardial infarct. The method consists in matching the distributions of the synthetic and real populations, quantified through the Kullback–Leibler (KL) divergence. Personalization is achieved with a gradient-free algorithm (CMA-ES), which generates sets of candidate solutions represented by their covariance matrix, whose coefficients evolve until the synthetic and real data are matched. However, the robustness of this strategy regarding settings and more complex data was not challenged. In this work, we specifically address these points, with (i) an improved design, (ii) a thorough evaluation on crucial aspects of the personalization process, including hyperparameters and initialization, and (iii) the application to 3D data. 
Despite some limits of the simple geometrical models used, our method is able to capture the main characteristics of the real data, as demonstrated both on 2D and 3D segmented late Gadolinium images of 123 subjects with acute myocardial infarction.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"188 ","pages":"Pages 8-14"},"PeriodicalIF":3.9,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142742881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
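The distribution-matching objective described above can be illustrated with the closed-form KL divergence between two univariate Gaussians. As a stand-in for CMA-ES (which evolves a full covariance matrix over populations of candidate solutions), the sketch below uses plain random search; all names and numbers are illustrative, not the paper's setup.

```python
import math
import random

def kl_gauss(m0, s0, m1, s1):
    """Closed-form KL( N(m0, s0^2) || N(m1, s1^2) ) for univariate Gaussians."""
    return math.log(s1 / s0) + (s0 ** 2 + (m0 - m1) ** 2) / (2 * s1 ** 2) - 0.5

# "Real" population summarized by its Gaussian fit (invented numbers).
target_m, target_s = 2.0, 1.5

# Gradient-free personalization: random search as a toy stand-in for CMA-ES.
random.seed(0)
best, best_kl = None, float("inf")
for _ in range(2000):
    m = random.uniform(-5.0, 5.0)   # candidate model parameter: mean
    s = random.uniform(0.1, 5.0)    # candidate model parameter: spread
    kl = kl_gauss(m, s, target_m, target_s)
    if kl < best_kl:
        best, best_kl = (m, s), kl

# Parameters whose synthetic distribution best matches the target.
print(best, best_kl)
```

The divergence is zero exactly when the synthetic and target distributions coincide, which is what the personalization drives toward.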
Improving ViT interpretability with patch-level mask prediction
IF 3.9 CAS Tier 3, Computer Science Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-22 DOI: 10.1016/j.patrec.2024.11.018
Junyong Kang, Byeongho Heo, Junsuk Choe
Vision Transformers (ViTs) have demonstrated remarkable performances on various computer vision tasks. Attention scores are often used to explain the decision-making process of ViTs, showing which tokens are more important than others. However, the attention scores have several limitations as an explanation for ViT, such as conflicting with other explainable methods or highlighting unrelated tokens. In order to address this limitation, we propose a novel method for generating a visual explanation map from ViTs. Unlike previous approaches that rely on attention scores, our method leverages ViT features and conducts a single forward pass through our Patch-level Mask prediction (PM) module. Our visual explanation map provides class-dependent and probabilistic interpretation that can identify crucial regions of model decisions. Experimental results demonstrate that our approach outperforms previous techniques in both classification and interpretability aspects. Additionally, it can be applied to the weakly-supervised object localization (WSOL) tasks using pseudo mask labels. Our method requires no extra parameters and necessitates minimal locality supervision, utilizing less than 1% of the ImageNet-1k training dataset.
{"title":"Improving ViT interpretability with patch-level mask prediction","authors":"Junyong Kang ,&nbsp;Byeongho Heo ,&nbsp;Junsuk Choe","doi":"10.1016/j.patrec.2024.11.018","DOIUrl":"10.1016/j.patrec.2024.11.018","url":null,"abstract":"<div><div>Vision Transformers (ViTs) have demonstrated remarkable performances on various computer vision tasks. Attention scores are often used to explain the decision-making process of ViTs, showing which tokens are more important than others. However, the attention scores have several limitations as an explanation for ViT, such as conflicting with other explainable methods or highlighting unrelated tokens. In order to address this limitation, we propose a novel method for generating a visual explanation map from ViTs. Unlike previous approaches that rely on attention scores, our method leverages ViT features and conducts a single forward pass through our Patch-level Mask prediction (PM) module. Our visual explanation map provides class-dependent and probabilistic interpretation that can identify crucial regions of model decisions. Experimental results demonstrate that our approach outperforms previous techniques in both classification and interpretability aspects. Additionally, it can be applied to the weakly-supervised object localization (WSOL) tasks using pseudo mask labels. 
Our method requires no extra parameters and necessitates minimal locality supervision, utilizing less than 1% of the ImageNet-1k training dataset.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"187 ","pages":"Pages 73-79"},"PeriodicalIF":3.9,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142721864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
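A patch-level explanation map is, at its simplest, a per-patch score grid upsampled to image resolution. The sketch below shows only that final reshaping/upsampling step with nearest-neighbor interpolation; the PM module that actually produces the scores is learned, and the scores and sizes here are made up for illustration.

```python
def scores_to_map(scores, grid, image_size):
    """Reshape a flat list of grid*grid patch scores into a square grid and
    nearest-neighbor upsample it to an image_size x image_size map."""
    assert len(scores) == grid * grid
    assert image_size % grid == 0
    patch = image_size // grid  # pixels covered by one patch
    return [[scores[(y // patch) * grid + (x // patch)]
             for x in range(image_size)]
            for y in range(image_size)]

# Illustrative 2x2 patch grid (e.g. per-patch mask probabilities)
# upsampled to a 4x4 explanation map.
heat = scores_to_map([0.1, 0.9, 0.3, 0.7], grid=2, image_size=4)
for row in heat:
    print(row)
```

A real ViT would use e.g. a 14x14 grid of 16-pixel patches for 224x224 inputs; the reshaping logic is the same.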
GANzzle++: Generative approaches for jigsaw puzzle solving as local to global assignment in latent spatial representations
IF 3.9 CAS Tier 3, Computer Science Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-19 DOI: 10.1016/j.patrec.2024.11.010
Davide Talon, Alessio Del Bue, Stuart James
Jigsaw puzzles are a popular and enjoyable pastime that humans can easily solve, even with many pieces. However, solving a jigsaw is a combinatorial problem, and the space of possible solutions is exponential in the number of pieces, intractable for pairwise solutions. In contrast to the classical pairwise local matching of pieces based on edge heuristics, we estimate an approximate solution image, i.e., a mental image, of the puzzle and exploit it to guide the placement of pieces as a piece-to-global assignment problem. Therefore, from unordered pieces, we consider conditioned generation approaches, including Generative Adversarial Networks (GAN) models, Slot Attention (SA) and Vision Transformers (ViT), to recover the solution image. Given the generated solution representation, we cast the jigsaw solving as a 1-to-1 assignment matching problem using Hungarian attention, which places pieces in corresponding positions in the global solution estimate. Results show that the newly proposed GANzzle-SA and GANzzle-VIT benefit from the early fusion strategy where pieces are jointly compressed and gathered for global structure recovery. A single deep learning model generalizes to puzzles of different sizes and improves the performances by a large margin. Evaluated on PuzzleCelebA and PuzzleWikiArts, our approaches bridge the gap of deep learning strategies with respect to optimization-based classic puzzle solvers.
{"title":"GANzzle++: Generative approaches for jigsaw puzzle solving as local to global assignment in latent spatial representations","authors":"Davide Talon ,&nbsp;Alessio Del Bue ,&nbsp;Stuart James","doi":"10.1016/j.patrec.2024.11.010","DOIUrl":"10.1016/j.patrec.2024.11.010","url":null,"abstract":"<div><div>Jigsaw puzzles are a popular and enjoyable pastime that humans can easily solve, even with many pieces. However, solving a jigsaw is a combinatorial problem, and the space of possible solutions is exponential in the number of pieces, intractable for pairwise solutions. In contrast to the classical pairwise local matching of pieces based on edge heuristics, we estimate an approximate solution image, i.e., a <em>mental image</em>, of the puzzle and exploit it to guide the placement of pieces as a piece-to-global assignment problem. Therefore, from unordered pieces, we consider conditioned generation approaches, including Generative Adversarial Networks (GAN) models, Slot Attention (SA) and Vision Transformers (ViT), to recover the solution image. Given the generated solution representation, we cast the jigsaw solving as a 1-to-1 assignment matching problem using Hungarian attention, which places pieces in corresponding positions in the global solution estimate. Results show that the newly proposed GANzzle-SA and GANzzle-VIT benefit from the early fusion strategy where pieces are jointly compressed and gathered for global structure recovery. A single deep learning model generalizes to puzzles of different sizes and improves the performances by a large margin. 
Evaluated on PuzzleCelebA and PuzzleWikiArts, our approaches bridge the gap of deep learning strategies with respect to optimization-based classic puzzle solvers.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"187 ","pages":"Pages 35-41"},"PeriodicalIF":3.9,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142699180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
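The 1-to-1 piece-to-location matching described above can be illustrated as a minimum-cost assignment over a cost matrix. The sketch below brute-forces all permutations for clarity; practical solvers use the Hungarian algorithm (e.g. scipy.optimize.linear_sum_assignment) to stay polynomial. The cost values are invented, standing in for the dissimilarity between each piece and each location in the estimated solution image.

```python
from itertools import permutations

def assign_pieces(cost):
    """Minimum-cost 1-to-1 assignment of pieces (rows) to slots (columns).
    Brute force over permutations: O(n!), fine only for tiny illustrations."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[piece][slot] for piece, slot in enumerate(perm))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

# cost[i][j]: dissimilarity between piece i and location j in the
# estimated global solution (invented numbers).
cost = [[0.1, 0.9, 0.8],
        [0.7, 0.2, 0.9],
        [0.8, 0.9, 0.1]]

print(assign_pieces(cost))  # optimal here: piece i placed at slot i
```

Each piece is used exactly once and each slot filled exactly once, which is the constraint the Hungarian attention enforces softly during training.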