{"title":"Negative Class Guided Spatial Consistency Network for Sparsely Supervised Semantic Segmentation of Remote Sensing Images","authors":"Chen Yang;Junxiao Wang;Huixiao Meng;Shuyuan Yang;Zhixi Feng","doi":"10.1109/TCSVT.2024.3457622","DOIUrl":null,"url":null,"abstract":"Deep neural networks (DNNs) have been successfully applied in the remote sensing semantic segmentation. However, training DNNs requires a large number of densely labeled samples, which is laborious and time-consuming. Sparsely supervised semantic segmentation (SSSS) can train deep segmentation networks using only sparse annotations. In this paper, we propose a negative class guided spatial consistency network (NCG-SCNet) for semantic segmentation with sparse annotations. Specifically, we introduce a spatial consistency enhancement module (SCEM) to enhance network features by non-linearly combining spatially similar features. Thus, it could provide better representations of the boundaries and the shape of the target. Additionally, a channel compression module (CCM) is proposed to reduce channel redundancy while preserving the network’s feature extraction capability. A negative class guided loss function (NCG Loss) is constructed to provide extra supervisory information, where the negative classes are defined as the classes with lower probability in the prediction. Extensive experiments on two widely used remote sensing datasets show that the proposed NCG-SCNet outperforms the comparison methods.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 1","pages":"657-669"},"PeriodicalIF":11.1000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10671595/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Deep neural networks (DNNs) have been successfully applied to remote sensing semantic segmentation. However, training DNNs requires a large number of densely labeled samples, which is laborious and time-consuming. Sparsely supervised semantic segmentation (SSSS) can train deep segmentation networks using only sparse annotations. In this paper, we propose a negative class guided spatial consistency network (NCG-SCNet) for semantic segmentation with sparse annotations. Specifically, we introduce a spatial consistency enhancement module (SCEM) that enhances network features by non-linearly combining spatially similar features, and can thus better represent the boundaries and shape of the target. Additionally, a channel compression module (CCM) is proposed to reduce channel redundancy while preserving the network's feature extraction capability. A negative class guided loss function (NCG Loss) is constructed to provide extra supervisory information, where negative classes are defined as classes with low predicted probability. Extensive experiments on two widely used remote sensing datasets show that the proposed NCG-SCNet outperforms the compared methods.
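To make the NCG Loss idea concrete, below is a minimal PyTorch-style sketch based only on the abstract's definition (negative classes are the classes assigned low probability in the prediction, and they supply extra supervision on top of the sparse labels). The function name `ncg_loss`, the number of negative classes `k`, and the weight `lam` are hypothetical illustration choices, not details from the paper.

```python
import torch
import torch.nn.functional as F

def ncg_loss(logits, sparse_labels, k=3, lam=1.0, ignore_index=255):
    """Hypothetical sketch of a negative-class-guided loss.

    logits:        (B, C, H, W) raw network outputs
    sparse_labels: (B, H, W) sparse annotations; `ignore_index`
                   marks the unlabeled pixels
    k:             number of lowest-probability classes treated as
                   "negative" at each pixel (assumed hyperparameter)
    lam:           weight on the negative-class term (assumed)
    """
    probs = F.softmax(logits, dim=1)  # (B, C, H, W)

    # Standard supervised term on the sparsely labeled pixels only.
    ce = F.cross_entropy(logits, sparse_labels, ignore_index=ignore_index)

    # Negative classes: the k classes with the lowest predicted
    # probability at each pixel, per the abstract's definition.
    neg_probs, _ = torch.topk(probs, k, dim=1, largest=False)  # (B, k, H, W)

    # Extra supervision on every pixel, labeled or not: push the
    # probability mass of the negative classes toward zero.
    neg_term = -torch.log(1.0 - neg_probs.clamp(max=1 - 1e-6)).mean()

    return ce + lam * neg_term

# Usage on dummy data: 6 classes, points labeled on a sparse grid.
logits = torch.randn(2, 6, 64, 64)
labels = torch.full((2, 64, 64), 255, dtype=torch.long)
labels[:, ::16, ::16] = torch.randint(0, 6, (2, 4, 4))
loss = ncg_loss(logits, labels)
```

The design intuition is that even at unlabeled pixels the network is usually confident about which classes are *not* present, so penalizing residual probability on the lowest-ranked classes adds supervision at no extra annotation cost.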
Journal Introduction:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.