
Latest Publications in The Visual Computer

Predicting pancreatic diseases from fundus images using deep learning
Pub Date: 2024-09-12 DOI: 10.1007/s00371-024-03619-5
Yiting Wu, Pinqi Fang, Xiangning Wang, Jie Shen

Pancreatic cancer (PC) is an extremely deadly cancer whose mortality rate closely tracks its incidence. By the time of diagnosis, it often presents at an advanced stage and has frequently spread to other parts of the body. Owing to these poor survival outcomes, pancreatic ductal adenocarcinoma (PDAC) is the fifth leading cause of cancer death worldwide, and the 5-year relative survival rate of pancreatic cancer, at about 6%, is the lowest among all cancers. Currently, there is no established guidance for screening individuals at high risk for pancreatic cancer, including those with a family history of pancreatic disease or with chronic pancreatitis (CP). With advances in medicine, fundus images can now be used to predict many systemic diseases, and associations between ocular changes and several pancreatic diseases have also been reported. Our objective is therefore to construct a deep learning model that identifies correlations between ocular features and major pancreatic diseases; the use of AI and fundus images has already extended well beyond the study of ocular disorders. To address the PC and CP classification tasks, we propose a new deep learning model (PANet) that integrates a pre-trained CNN backbone, multi-scale feature modules, attention mechanisms, and an FC classifier. PANet adopts a ResNet34 backbone and selectively integrates attention modules to build its basic architecture; to strengthen feature extraction, it places multi-scale feature modules before the attention modules. The model is trained and evaluated on a dataset of 1300 fundus images. It achieves an accuracy of 91.50% and an area under the receiver operating characteristic curve (AUC) of 96.00% in PC classification, and an accuracy of 95.60% and an AUC of 99.20% in CP classification. Our study establishes a characterizing link between ocular features and major pancreatic diseases, providing a non-invasive, convenient, and complementary method for screening and detecting pancreatic diseases.
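As a rough illustration of the architecture described above (backbone, multi-scale module placed before attention, FC classifier), here is a minimal PyTorch-style sketch. The specific module designs (parallel dilated convolutions, squeeze-and-excitation channel attention), all layer sizes, and the simplified backbone standing in for ResNet34 are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel dilated 3x3 convolutions fused by a 1x1 conv (assumed design)."""
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 4)])
        self.fuse = nn.Conv2d(3 * ch, ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed design)."""
    def __init__(self, ch, r=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch // r, 1), nn.ReLU(),
            nn.Conv2d(ch // r, ch, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.mlp(x)

class PANetSketch(nn.Module):
    def __init__(self, num_classes=2, ch=512):
        super().__init__()
        # Stand-in for the pre-trained ResNet34 feature extractor.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, ch, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        self.multi_scale = MultiScaleBlock(ch)   # multi-scale module before attention
        self.attention = ChannelAttention(ch)
        self.classifier = nn.Sequential(         # FC classifier head
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, num_classes))

    def forward(self, x):
        f = self.backbone(x)
        f = self.attention(self.multi_scale(f))
        return self.classifier(f)

logits = PANetSketch()(torch.randn(2, 3, 224, 224))  # -> shape (2, 2)
```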

Citations: 0
A modal fusion network with dual attention mechanism for 6D pose estimation
Pub Date: 2024-09-06 DOI: 10.1007/s00371-024-03614-w
Liangrui Wei, Feifei Xie, Lin Sun, Jinpeng Chen, Zhipeng Zhang

6D pose estimation from RGB-D data holds significant application value in computer vision and related fields. Current deep learning methods commonly employ convolutional networks for feature extraction, which are sensitive to keypoints at close range but overlook information from keypoints at longer distances. Moreover, in subsequent stages they fail to effectively fuse spatial features (depth-channel features) with color texture features (RGB-channel features). This limitation compromises the accuracy of existing RGB-D-based 6D pose networks. To solve this problem, a novel end-to-end 6D pose estimation network is proposed. In the depth-processing branch, a global spatial weighting is established using a mask-vector attention mechanism to extract depth features robustly. In the feature fusion phase, a symmetric fusion module is introduced in which spatial features and color texture features are fused through a cross-attention mechanism. Experimental evaluations on the LINEMOD and LINEMOD-OCCLUSION datasets show that our method reaches ADD(-S) scores of 95.84% and 47.89%, respectively. Compared to state-of-the-art methods, it demonstrates superior pose estimation performance for objects with complex shapes, and under occlusion its pose estimation accuracy for asymmetric objects is effectively improved.
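To make the symmetric cross-attention fusion concrete, here is a minimal sketch of one way such a module could fuse RGB and depth feature sequences. The two-way attention direction, dimensions, head count, and residual/concatenation layout are assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class SymmetricCrossFusion(nn.Module):
    """Fuse RGB and depth feature sequences with two cross-attention passes
    (RGB queries depth, depth queries RGB), then merge them (assumed design)."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.rgb_from_depth = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.depth_from_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, rgb_feat, depth_feat):
        # rgb_feat, depth_feat: (batch, num_points, dim)
        r, _ = self.rgb_from_depth(rgb_feat, depth_feat, depth_feat)
        d, _ = self.depth_from_rgb(depth_feat, rgb_feat, rgb_feat)
        return self.out(torch.cat([rgb_feat + r, depth_feat + d], dim=-1))

fused = SymmetricCrossFusion()(torch.randn(2, 500, 256), torch.randn(2, 500, 256))
```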

Citations: 0
HCT-Unet: multi-target medical image segmentation via a hybrid CNN-transformer Unet incorporating multi-axis gated multi-layer perceptron
Pub Date: 2024-09-06 DOI: 10.1007/s00371-024-03612-y
Yazhuo Fan, Jianhua Song, Lei Yuan, Yunlin Jia

In recent years, network structures that integrate convolutional neural networks (CNN) and Transformers have been built to combine their individual strengths in medical image segmentation. However, most methods integrate CNN and Transformer at only a single level and cannot extract low-level detail features and high-level abstract information simultaneously; such structures also lack the flexibility to dynamically adjust the contributions of different feature maps. To address these limitations, we introduce HCT-Unet, a hybrid CNN-Transformer model designed specifically for multi-organ medical image segmentation. HCT-Unet introduces a tunable hybrid paradigm that differs significantly from conventional hybrid architectures: at each stage it deploys a CNN to capture short-range information and a Transformer to extract long-range information. Furthermore, we design a multi-functional multi-scale fusion bridge that progressively integrates information from different scales and dynamically modifies attention weights for both local and global features. With these two innovative designs, HCT-Unet demonstrates robust discriminative dependency and representation capabilities in multi-target medical image tasks. Experimental results show remarkable performance in medical image segmentation: in multi-organ segmentation HCT-Unet achieves a Dice similarity coefficient (DSC) of 82.23%, and in cardiac segmentation it reaches a DSC of 91%, significantly outperforming previous state-of-the-art networks. The code has been released on Zenodo: https://zenodo.org/doi/10.5281/zenodo.11070837.
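One way to realize the "dynamically adjust the contributions of different feature maps" idea at a single stage is a learned gate between the local (CNN) and global (Transformer) branches; the sketch below shows that mechanism only. The per-channel gating form and layer sizes are assumptions and are not the paper's fusion bridge.

```python
import torch
import torch.nn as nn

class GatedLocalGlobalFusion(nn.Module):
    """Dynamically weight a local (CNN) and a global (Transformer) feature map
    with a learned per-channel gate (assumed mechanism)."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * ch, ch, 1), nn.ReLU(),
            nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.proj = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, local_feat, global_feat):
        g = self.gate(torch.cat([local_feat, global_feat], dim=1))  # (B, C, 1, 1)
        return self.proj(g * local_feat + (1 - g) * global_feat)

x_local = torch.randn(1, 64, 56, 56)
x_global = torch.randn(1, 64, 56, 56)
y = GatedLocalGlobalFusion(64)(x_local, x_global)   # (1, 64, 56, 56)
```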

Citations: 0
Crafting imperceptible and transferable adversarial examples: leveraging conditional residual generator and wavelet transforms to deceive deepfake detection
Pub Date: 2024-09-06 DOI: 10.1007/s00371-024-03605-x
Zhiyuan Li, Xin Jin, Qian Jiang, Puming Wang, Shin-Jye Lee, Shaowen Yao, Wei Zhou

The malicious abuse of deepfakes has raised serious ethical, security, and privacy concerns, eroding public trust in digital media. While existing deepfake detectors can detect fake images, they are vulnerable to adversarial attacks. Although various adversarial attacks have been explored, most are white-box attacks that are difficult to realize in practice, and the generated adversarial examples are of poor quality and easily noticeable to the human eye. For this detection task, the goal should be to generate adversarial examples that deceive detectors while maintaining high quality and authenticity. We propose a method for generating imperceptible and transferable adversarial examples aimed at fooling unknown deepfake detectors. The method combines a conditional residual generator with an accessible detector as a surrogate model, using the detector's relative distance loss function to generate highly transferable adversarial examples; the discrete wavelet transform is also introduced to enhance image quality. Extensive experiments demonstrate that the adversarial examples generated by our method not only possess excellent visual quality but also effectively deceive various detectors, exhibiting superior cross-detector transferability in black-box attacks. Our code is available at: https://github.com/SiSuiyuHang/ITA.
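The sketch below illustrates the general recipe of a generator-plus-surrogate attack with a wavelet-based quality term. The `generator` and `surrogate_detector` callables are hypothetical stand-ins, and the specific relative-distance formulation, the Haar low-pass approximation, and the loss weights are assumptions rather than the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def haar_ll(x):
    """Low-frequency approximation (Haar LL band up to scaling) via 2x2 averaging."""
    return F.avg_pool2d(x, kernel_size=2)

def attack_step(generator, surrogate_detector, real_imgs, fake_imgs, eps=0.03):
    """One optimization step: perturb fake images so the surrogate detector
    scores them more like real images, while keeping low-frequency content
    (and thus visual quality) close to the original."""
    delta = generator(fake_imgs).tanh() * eps          # bounded residual perturbation
    adv = (fake_imgs + delta).clamp(0, 1)

    s_adv = surrogate_detector(adv)                    # higher score = "more fake" (assumed)
    s_real = surrogate_detector(real_imgs)
    s_fake = surrogate_detector(fake_imgs)

    # Triplet-style relative distance: adv scores should sit closer to the
    # real-score distribution than to the fake-score one (assumed formulation).
    adv_loss = F.relu((s_adv - s_real.mean()).abs()
                      - (s_adv - s_fake.mean()).abs() + 1.0).mean()
    # Wavelet low-frequency fidelity pushes perturbations into high frequencies.
    quality_loss = F.l1_loss(haar_ll(adv), haar_ll(fake_imgs))
    return adv_loss + 10.0 * quality_loss
```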

Citations: 0
HASN: hybrid attention separable network for efficient image super-resolution
Pub Date: 2024-09-03 DOI: 10.1007/s00371-024-03610-0
Weifeng Cao, Xiaoyan Lei, Jun Shi, Wanyong Liang, Jie Liu, Zongfei Bai

Recently, because hardware resources are often limited, lightweight methods for single-image super-resolution have gained significant popularity and achieved impressive performance. These methods demonstrate that adopting residual feature distillation is an effective way to enhance performance. However, we find that using residual connections after each block increases the model's storage and computational cost. Therefore, to simplify the network structure and learn higher-level features and relationships between features, we use depth-wise separable convolutions, fully connected layers, and activation functions as the basic feature extraction modules, which significantly reduces the computational load and the number of parameters while maintaining strong feature extraction capabilities. To further enhance model performance, we propose the hybrid attention separable block, which combines channel attention and spatial attention and thus exploits their complementary advantages. During the training phase, we also adopt a warm-start retraining strategy to further exploit the potential of the model. Extensive experiments demonstrate the effectiveness of our approach: it achieves a smaller model size and reduced computational complexity without compromising performance. Code is available at https://github.com/nathan66666/HASN.git
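For illustration, here is a minimal sketch of a depth-wise separable block combined with channel and spatial attention, in the spirit of the hybrid attention separable block described above. The exact arrangement, activation choices, and kernel sizes are assumptions, not the released HASN code.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depth-wise conv followed by a 1x1 point-wise conv."""
    def __init__(self, ch):
        super().__init__()
        self.depthwise = nn.Conv2d(ch, ch, 3, padding=1, groups=ch)
        self.pointwise = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class HybridAttentionSeparableBlock(nn.Module):
    """Channel attention and spatial attention applied to separable-conv
    features (assumed arrangement)."""
    def __init__(self, ch):
        super().__init__()
        self.conv = DepthwiseSeparableConv(ch)
        self.act = nn.GELU()
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.spatial_att = nn.Sequential(
            nn.Conv2d(ch, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        f = self.act(self.conv(x))
        f = f * self.channel_att(f)   # re-weight channels
        f = f * self.spatial_att(f)   # re-weight spatial positions
        return f                       # no per-block residual, per the abstract's motivation

y = HybridAttentionSeparableBlock(48)(torch.randn(1, 48, 64, 64))
```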

Citations: 0
TP-LSM: visual temporal pyramidal time modeling network to multi-label action detection in image-based AI
Pub Date: 2024-08-30 DOI: 10.1007/s00371-024-03601-1
Haojie Gao, Peishun Liu, Xiaolong Ma, Zikang Yan, Ningning Ma, Wenqiang Liu, Xuefang Wang, Ruichun Tang

Dense multi-label action detection is a challenging task in the field of visual action understanding, where multiple actions occur simultaneously over different time spans; accurately assessing the short-term and long-term temporal dependencies between actions is therefore crucial for action detection. An effective temporal modeling technique is needed to detect the temporal dependencies of actions in videos and to learn long-term and short-term action information efficiently. This paper proposes a new method for multi-label action detection based on a temporal pyramid and long short-term time modeling, combining a hierarchical structure with a pyramid feature hierarchy for dense multi-label temporal action detection. Using the expansion and compression convolution module (SEC) and external attention for time modeling, we focus on the temporal relationships of long- and short-term actions at each stage, and then integrate hierarchical pyramid features to detect actions accurately at different temporal resolution scales. We evaluated the model on dense multi-label benchmark datasets and achieved mAP scores of 47.3% and 36.0% on the MultiTHUMOS and TSU datasets, outperforming the current state-of-the-art results by 2.7% and 2.3%, respectively. The code is available at https://github.com/Yoona6371/TP-LSM.
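To give a feel for the two temporal ingredients named above, here is a minimal sketch of an expand-then-compress 1D convolution block and an external-attention layer over frame features. Channel counts, the expansion ratio, and the normalization details are assumptions; this is not the released TP-LSM code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SECTemporalBlock(nn.Module):
    """Expand-then-compress 1D convolutions over the time axis (assumed SEC layout)."""
    def __init__(self, ch, expand=4):
        super().__init__()
        self.expand = nn.Conv1d(ch, ch * expand, kernel_size=3, padding=1)
        self.compress = nn.Conv1d(ch * expand, ch, kernel_size=1)

    def forward(self, x):                      # x: (batch, channels, time)
        return x + self.compress(F.gelu(self.expand(x)))

class ExternalAttention(nn.Module):
    """External attention with two small learnable memory units."""
    def __init__(self, dim, mem=64):
        super().__init__()
        self.mk = nn.Linear(dim, mem, bias=False)
        self.mv = nn.Linear(mem, dim, bias=False)

    def forward(self, x):                      # x: (batch, time, dim)
        attn = F.softmax(self.mk(x), dim=1)    # softmax over the time axis
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-6)  # normalize over memory
        return self.mv(attn)

feats = torch.randn(2, 256, 128)               # (batch, channels, frames)
t = SECTemporalBlock(256)(feats)
out = ExternalAttention(256)(t.transpose(1, 2))  # (2, 128, 256)
```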

Citations: 0
Batch-transformer for scene text image super-resolution
Pub Date: 2024-08-29 DOI: 10.1007/s00371-024-03598-7
Yaqi Sun, Xiaolan Xie, Zhi Li, Kai Yang

Recognizing low-resolution text images is challenging because they often lose detailed information, leading to poor recognition accuracy. Moreover, traditional methods based on deep convolutional neural networks (CNNs) are not effective enough for low-resolution text images with dense characters. In this paper, a novel CNN-based batch-transformer network for scene text image super-resolution (BT-STISR) is proposed to address this problem. To obtain the text information needed for text reconstruction, a pre-trained text prior module is employed to extract text information. A novel two-pipeline batch-transformer-based module is then proposed, leveraging self-attention and global attention mechanisms to exert the guidance of the text prior over the text reconstruction process. Experiments on the benchmark dataset TextZoom show that the proposed BT-STISR method achieves state-of-the-art performance in terms of structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) compared to several recent methods.
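The sketch below shows one simple way a text prior can guide reconstruction: image tokens first attend to themselves, then cross-attend to text-prior tokens before upsampling. The token layout, dimensions, and pixel-shuffle head are assumptions and do not reproduce the paper's two-pipeline design.

```python
import torch
import torch.nn as nn

class TextPriorGuidedBlock(nn.Module):
    """Self-attention over image tokens plus cross-attention to text-prior
    tokens, followed by a pixel-shuffle upsampler (assumed arrangement)."""
    def __init__(self, dim=64, heads=4, scale=2):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.upsample = nn.Sequential(
            nn.Conv2d(dim, 3 * scale * scale, 3, padding=1), nn.PixelShuffle(scale))

    def forward(self, img_feat, text_prior):
        # img_feat: (B, dim, H, W); text_prior: (B, num_chars, dim)
        b, c, h, w = img_feat.shape
        tokens = img_feat.flatten(2).transpose(1, 2)             # (B, H*W, dim)
        tokens = tokens + self.self_attn(tokens, tokens, tokens)[0]
        tokens = tokens + self.cross_attn(tokens, text_prior, text_prior)[0]
        feat = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.upsample(feat)                               # (B, 3, 2H, 2W)

sr = TextPriorGuidedBlock()(torch.randn(1, 64, 16, 64), torch.randn(1, 26, 64))
```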

Citations: 0
Lightweight CNN-ViT with cross-module representational constraint for express parcel detection
Pub Date: 2024-08-28 DOI: 10.1007/s00371-024-03602-0
Guowei Zhang, Wuzhi Li, Yutong Tang, Shuixuan Chen, Li Wang

Express parcel (EP) detection models need to be deployed on edge devices with limited computing capabilities, so a lightweight and efficient object detection model is essential. In this work, we introduce CMViT, a novel lightweight CNN-ViT with a cross-module representational constraint designed specifically for EP detection. In CMViT, we draw on the concept of cross-attention from multimodal models and propose a new cross-module attention (CMA) encoder. Local features are provided by the proposed lightweight shuffle block (LSBlock), and the CMA encoder flexibly connects local and global features from the hybrid CNN-ViT model through self-attention, constructing a robust dependency between local and global features and thereby effectively enlarging the model's receptive field. Furthermore, LSBlock provides effective guidance and constraints for the CMA encoder, avoiding unnecessary attention to redundant information and reducing computational cost. In EP detection, compared to YOLOv8s, CMViT achieves 99% mean accuracy with a 25% input resolution, 54.5% of the parameters, and 14.7% of the FLOPs, showing superior performance and promising applications. In more challenging object detection tasks, CMViT achieves 28.8 mAP with 2.2G MAdds on the COCO dataset, outperforming MobileViT by 4% in accuracy while consuming less computational power. Code is available at: https://github.com/Acc2386/CMViT.
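As a rough illustration of "local features guiding a global branch", here is a minimal sketch of a ShuffleNet-style local block plus an attention layer in which global tokens attend to the local feature map. The block composition, attention direction, and the names `LSBlockSketch` and `CrossModuleAttention` are assumptions for illustration only.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    """Interleave channels across groups (ShuffleNet-style)."""
    b, c, h, w = x.shape
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

class LSBlockSketch(nn.Module):
    """Lightweight shuffle block: grouped 1x1 + depth-wise 3x3 + shuffle (assumed)."""
    def __init__(self, ch, groups=4):
        super().__init__()
        self.groups = groups
        self.pw = nn.Conv2d(ch, ch, 1, groups=groups)
        self.dw = nn.Conv2d(ch, ch, 3, padding=1, groups=ch)

    def forward(self, x):
        return channel_shuffle(self.dw(self.pw(x)).relu(), self.groups) + x

class CrossModuleAttention(nn.Module):
    """Global tokens attend to local CNN features, so local structure guides
    the global branch (assumed direction of the cross-module attention)."""
    def __init__(self, dim=96, heads=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, global_tokens, local_feat):
        local_tokens = local_feat.flatten(2).transpose(1, 2)   # (B, H*W, dim)
        out, _ = self.attn(global_tokens, local_tokens, local_tokens)
        return global_tokens + out

local = LSBlockSketch(96)(torch.randn(1, 96, 28, 28))
tokens = CrossModuleAttention()(torch.randn(1, 49, 96), local)
```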

Citations: 0
Semantic segmentation of multi-scale remote sensing images with contextual feature enhancement
Pub Date: 2024-08-27 DOI: 10.1007/s00371-024-03419-x
Mei Zhang, Lingling Liu, Yongtao Pei, Guojing Xie, Jinghua Wen

Remote sensing images exhibit complex characteristics such as irregular multi-scale feature shapes, significant scale variations, and imbalanced sizes between different categories, which reduce the accuracy of semantic segmentation. To address this problem, this paper presents a context feature-enhanced multi-scale semantic segmentation method for remote sensing images. It uses a context aggregation module for global context co-aggregation, obtaining feature representations at different levels through self-similarity calculation and convolution operations. The processed features are fed into a feature enhancement module, which introduces a channel gate mechanism to strengthen the expressive power of the feature maps by exploiting channel correlations and weighted fusion operations. Additionally, pyramid pooling is employed to capture multi-scale information from the enhanced features, improving the performance and accuracy of the semantic segmentation model. Experimental results on the Vaihingen and Potsdam datasets (publicly available at https://www.isprs.org/education/benchmarks/UrbanSemLab/Default.aspx) demonstrate significant improvements in performance and accuracy compared to previous multi-scale remote sensing image semantic segmentation approaches, verifying the effectiveness of the proposed method (whose source code is publicly released, as noted in Sect. 3.4).
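The channel gate and pyramid pooling steps can be sketched with standard components, as below: a squeeze-style per-channel gate followed by PSPNet-style pyramid pooling. The pooling bin sizes, reduction ratios, and channel counts are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelGate(nn.Module):
    """Per-channel gating from globally pooled statistics (assumed form)."""
    def __init__(self, ch, r=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // r), nn.ReLU(), nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))            # (B, C)
        return x * w[:, :, None, None]

class PyramidPooling(nn.Module):
    """PSPNet-style pyramid pooling over the gated features."""
    def __init__(self, ch, bins=(1, 2, 3, 6)):
        super().__init__()
        self.stages = nn.ModuleList(
            [nn.Sequential(nn.AdaptiveAvgPool2d(b), nn.Conv2d(ch, ch // 4, 1))
             for b in bins])
        self.fuse = nn.Conv2d(ch + len(bins) * (ch // 4), ch, 3, padding=1)

    def forward(self, x):
        h, w = x.shape[2:]
        pooled = [F.interpolate(s(x), size=(h, w), mode='bilinear',
                                align_corners=False) for s in self.stages]
        return self.fuse(torch.cat([x] + pooled, dim=1))

feat = torch.randn(1, 256, 64, 64)
out = PyramidPooling(256)(ChannelGate(256)(feat))   # (1, 256, 64, 64)
```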

Citations: 0
A novel single-stage network for accurate image restoration
Pub Date: 2024-08-26 DOI: 10.1007/s00371-024-03599-6
Hu Gao, Jing Yang, Ying Zhang, Ning Wang, Jingfan Yang, Depeng Dang

Image restoration aims to recover a high-quality image from a corrupted input, as in deblurring and deraining. It typically requires maintaining a delicate balance between spatial details and contextual information. Although a multi-stage network can balance these competing goals and achieve significant performance, it also increases system complexity. In this paper, we propose a mountain-shaped single-stage design that matches the performance of multi-stage networks through a plug-and-play feature fusion middleware. Specifically, the middleware serves as an information exchange component between the encoder-decoder architectural levels: it seamlessly integrates upper-layer information into the adjacent lower layer, sequentially down to the lowest layer, and finally fuses all information at the original image-resolution manipulation level. This preserves spatial details and integrates contextual information, ensuring high-quality image restoration. At the same time, we propose a multi-head attention middle block as a bridge between the encoder and decoder to capture more global information and overcome the limited receptive field of CNNs. To keep system complexity low, we remove or replace unnecessary nonlinear activation functions. Extensive experiments demonstrate that our approach, named M3SNet, outperforms previous state-of-the-art models on several image restoration tasks, such as image deraining and deblurring, while using less than half the computational cost. The code and the pre-trained models will be released at https://github.com/Tombs98/M3SNet.
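The top-down "integrate upper-layer information into the adjacent lower layer" idea can be sketched as a progressive fusion over a feature pyramid, as below. The channel counts, upsampling choice, and smoothing convolutions are assumptions; this is not the released M3SNet middleware.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusionMiddleware(nn.Module):
    """Top-down fusion: each coarser level is upsampled and merged into the
    next finer level, ending at the full-resolution map (assumed design)."""
    def __init__(self, channels=(32, 64, 128)):     # fine -> coarse
        super().__init__()
        self.reduce = nn.ModuleList(
            [nn.Conv2d(c_coarse, c_fine, 1)
             for c_fine, c_coarse in zip(channels[:-1], channels[1:])])
        self.smooth = nn.ModuleList(
            [nn.Conv2d(c, c, 3, padding=1) for c in channels[:-1]])

    def forward(self, feats):                       # feats listed fine -> coarse
        out = feats[-1]
        for i in range(len(feats) - 2, -1, -1):     # walk toward the finest level
            up = F.interpolate(self.reduce[i](out), size=feats[i].shape[2:],
                               mode='bilinear', align_corners=False)
            out = self.smooth[i](feats[i] + up)
        return out                                  # full-resolution fused features

f1 = torch.randn(1, 32, 128, 128)                   # finest level
f2 = torch.randn(1, 64, 64, 64)
f3 = torch.randn(1, 128, 32, 32)                    # coarsest level
fused = FeatureFusionMiddleware()([f1, f2, f3])     # (1, 32, 128, 128)
```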

Citations: 0