Selective Depth Attention Networks for Adaptive Multiscale Feature Representation

Qingbei Guo;Xiao-Jun Wu;Tianyang Xu;Tongzhen Si;Cong Hu;Jinglan Tian
DOI: 10.1109/TAI.2024.3401652
Journal: IEEE Transactions on Artificial Intelligence
Published: 2024-03-15 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10531158/
Citations: 0

Abstract

Existing multiscale methods risk merely enlarging receptive field sizes while neglecting small receptive fields, so effectively constructing adaptive neural networks that recognize objects at various spatial scales remains a challenging problem. To tackle this issue, we first introduce a new attention dimension, depth, alongside existing attention mechanisms such as channel attention, spatial attention, branch attention, and self-attention. We present a novel selective depth attention network that treats multiscale objects symmetrically across various vision tasks. Specifically, the blocks within each stage of a network, whether a convolutional neural network (CNN) such as ResNet, SENet, or Res2Net, or a vision transformer (ViT) such as PVTv2, output hierarchical feature maps with the same resolution but different receptive field sizes. Exploiting this structural property, we design a depthwise building module, the selective depth attention (SDA) module, comprising a trunk branch and an SE-like attention branch. The block outputs of the trunk branch are fused and passed through the attention branch, which globally guides their depth-attention allocation. With this attention mechanism we dynamically select features at different depths, adaptively adjusting the receptive field size to variable-sized input objects. Moreover, our method is orthogonal to multiscale networks and attention networks, yielding the so-called SDA-$x$Net. Extensive experiments demonstrate that the proposed SDA method, as a lightweight and efficient plug-in, significantly improves the original performance on numerous computer vision tasks, e.g., image classification, object detection, and instance segmentation.
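The abstract describes the SDA module as a trunk branch whose same-resolution block outputs are fused under an SE-like attention branch that allocates weights along the depth axis. The sketch below is a minimal, framework-free illustration of that idea, not the authors' implementation: the function name `sda_fuse`, the identity-sized bottleneck, and the softmax over depth are all illustrative assumptions.

```python
import math

def global_avg_pool(fmap):
    """Collapse a 2-D feature map (list of rows) to a single scalar descriptor."""
    total = sum(sum(row) for row in fmap)
    return total / (len(fmap) * len(fmap[0]))

def softmax(xs):
    """Numerically stable softmax over a list of scalars."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sda_fuse(block_outputs, w1, w2):
    """Hypothetical selective depth attention over D same-resolution block outputs.

    block_outputs: list of D feature maps (H x W nested lists) from one stage,
                   ordered shallow (small receptive field) to deep (large).
    w1, w2: weights of a tiny SE-like two-layer gate (kept D x D for simplicity;
            the paper's exact bottleneck layout is not reproduced here).
    Returns (fused map, per-depth attention weights).
    """
    D = len(block_outputs)
    # 1. Squeeze: one scalar descriptor per depth via global average pooling.
    z = [global_avg_pool(f) for f in block_outputs]
    # 2. Excite: linear -> ReLU -> linear, then softmax over the *depth* axis,
    #    yielding one attention weight per block.
    h = [max(0.0, sum(w1[i][j] * z[j] for j in range(D))) for i in range(D)]
    a = softmax([sum(w2[i][j] * h[j] for j in range(D)) for i in range(D)])
    # 3. Select: depth-weighted sum of the block outputs, so shallow and deep
    #    features are re-weighted per input, adapting the effective
    #    receptive field to the object's scale.
    H, W = len(block_outputs[0]), len(block_outputs[0][0])
    fused = [[sum(a[d] * block_outputs[d][y][x] for d in range(D))
              for x in range(W)] for y in range(H)]
    return fused, a
```

With two toy 2x2 block outputs and identity gate weights, the deeper block (larger pooled descriptor) receives the larger attention weight, and the fused map lies between the two inputs — the per-input, per-depth re-weighting the abstract describes.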
Source journal CiteScore: 7.70
Self-citation rate: 0.00%