DCM: A Dense-Attention Context Module For Semantic Segmentation

Shenghua Li, Quan Zhou, Jia Liu, Jie Wang, Yawen Fan, Xiaofu Wu, Longin Jan Latecki
{"title":"DCM: A Dense-Attention Context Module For Semantic Segmentation","authors":"Shenghua Li, Quan Zhou, Jia Liu, Jie Wang, Yawen Fan, Xiaofu Wu, Longin Jan Latecki","doi":"10.1109/ICIP40778.2020.9190675","DOIUrl":null,"url":null,"abstract":"For image semantic segmentation, a fully convolutional network is usually employed as the encoder to abstract visual features of the input image. A meticulously designed decoder is used to decoding the final feature map of the backbone. The output resolution of backbones which are designed for image classification task is too low to match segmentation task. Most existing methods for obtaining the final high-resolution feature map can not fully utilize the information of different layers of the backbone. To adequately extract the information of a single layer, the multi-scale context information of different layers, and the global information of backbone, we present a new attention-augmented module named Dense-attention Context Module (DCM), which is used to connect the common backbones and the other decoding heads. The experiments show the promising results of our method on Cityscapes dataset.","PeriodicalId":405734,"journal":{"name":"2020 IEEE International Conference on Image Processing (ICIP)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Conference on Image Processing (ICIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICIP40778.2020.9190675","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

For image semantic segmentation, a fully convolutional network is usually employed as the encoder to abstract visual features of the input image, and a meticulously designed decoder is then used to decode the final feature map of the backbone. The output resolution of backbones designed for the image classification task is too low for the segmentation task. Most existing methods for obtaining the final high-resolution feature map cannot fully utilize the information of the different layers of the backbone. To adequately extract the information of a single layer, the multi-scale context information of different layers, and the global information of the backbone, we present a new attention-augmented module named the Dense-attention Context Module (DCM), which connects common backbones to other decoding heads. Experiments show the promising results of our method on the Cityscapes dataset.
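The abstract only outlines what the module does, so the following is a minimal, hypothetical PyTorch sketch of an attention-augmented context head in that spirit: per-stage 1x1 projections, squeeze-and-excitation style channel attention on each backbone stage, and an image-level global branch, all fused at the highest feature resolution. The class names, channel widths (ResNet-style stages), reduction factor, and fusion scheme are assumptions for illustration, not the authors' published DCM.

```python
# Hypothetical sketch of a dense-attention context head.
# Names, channel widths, and the fusion scheme are assumptions;
# the paper's exact DCM design is not given in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed design)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Global average pool -> per-channel gates -> rescale the feature map.
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        return x * w


class DenseContextHead(nn.Module):
    """Fuses multi-scale backbone features with attention and a global branch."""
    def __init__(self, in_channels=(256, 512, 1024, 2048), mid=256, num_classes=19):
        super().__init__()
        # 1x1 projections bring every backbone stage to a common width.
        self.reduce = nn.ModuleList(
            nn.Conv2d(c, mid, kernel_size=1, bias=False) for c in in_channels
        )
        self.attend = nn.ModuleList(ChannelAttention(mid) for _ in in_channels)
        # Global branch: image-level pooling captures whole-scene context.
        self.global_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_channels[-1], mid, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
        )
        self.classify = nn.Conv2d(mid * (len(in_channels) + 1), num_classes, kernel_size=1)

    def forward(self, feats):
        # feats: list of backbone stage outputs, shallow to deep.
        target_size = feats[0].shape[2:]
        branches = []
        for f, proj, att in zip(feats, self.reduce, self.attend):
            x = att(proj(f))
            branches.append(F.interpolate(x, size=target_size,
                                          mode="bilinear", align_corners=False))
        g = self.global_branch(feats[-1]).expand(-1, -1, *target_size)
        return self.classify(torch.cat(branches + [g], dim=1))


if __name__ == "__main__":
    # Fake ResNet-style stage outputs for a Cityscapes-sized crop.
    feats = [torch.randn(1, c, 128 // s, 256 // s)
             for c, s in zip((256, 512, 1024, 2048), (1, 2, 4, 8))]
    print(DenseContextHead()(feats).shape)  # torch.Size([1, 19, 128, 256])
```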