Content-Adaptive Multi-Region Deep Network for Polarimetric SAR Image Classification
Junfei Shi; Shanshan Ji; Haiyan Jin; Junhuai Li; Maoguo Gong; Weisi Lin
IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 1, pp. 617-631
DOI: 10.1109/TCSVT.2024.3456480 (https://ieeexplore.ieee.org/document/10669389/)
Published: 2024-09-09 (Journal Article); JCR: Q1, Engineering, Electrical & Electronic; Impact Factor: 11.1
Citations: 0
Abstract
Deep learning methods excel at Polarimetric SAR (PolSAR) image classification. However, existing methods typically sample an image block for each pixel with a fixed-size square window, whose content is often inconsistent or incomplete with respect to the central pixel, causing many misclassifications, especially in boundary and heterogeneous regions. A fixed-size square window is therefore insufficient for representing diverse terrain objects. To address this issue, we develop a content-adaptive multi-region deep network that obtains contextually consistent sampling windows for diverse terrain objects. First, the complex scene of a PolSAR image is partitioned into homogeneous, heterogeneous, and boundary regions. Then, sampling windows with adaptive direction and scale are designed for these three distinct region types. In addition, windows covering central and global regions are proposed to provide complementary local and global information. Finally, a fusion network is designed to adaptively combine the different sampling windows to enhance classification performance. Experimental results on three real data sets demonstrate that the proposed method achieves superior performance on both edge details and heterogeneous terrain objects compared with state-of-the-art methods.
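The abstract's first step, partitioning a scene into homogeneous, heterogeneous, and boundary regions before choosing a sampling window, can be illustrated with a minimal sketch. The partition criterion below (coefficient of variation plus a left/right intensity step test) and the window sizes are hypothetical placeholders; the paper's actual partition rule and scales are not given in the abstract.

```python
from statistics import mean, pstdev

def region_type(patch, homo_thresh=0.05, edge_thresh=0.2):
    """Label a pixel's neighborhood as homogeneous, boundary, or heterogeneous.

    Hypothetical criterion: low coefficient of variation -> homogeneous;
    a strong intensity step between the patch halves -> boundary;
    otherwise -> heterogeneous.  `patch` is a 2-D list of intensities.
    """
    vals = [v for row in patch for v in row]
    m = mean(vals)
    cv = pstdev(vals) / (m + 1e-8)  # coefficient of variation
    if cv < homo_thresh:
        return "homogeneous"
    half = len(patch[0]) // 2
    left = mean(v for row in patch for v in row[:half])
    right = mean(v for row in patch for v in row[half:])
    if abs(left - right) > edge_thresh * (m + 1e-8):
        return "boundary"
    return "heterogeneous"

# Assumed window scales per region type: larger windows where context is
# uniform, smaller ones near boundaries to avoid mixing classes.
WINDOW_SIZE = {"homogeneous": 15, "heterogeneous": 9, "boundary": 5}
```

A classifier would then crop a `WINDOW_SIZE[region_type(probe)]`-sized block around each pixel instead of one fixed-size square, which is the core idea the abstract describes.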
About the Journal:
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.