{"title":"PPMamba:基于金字塔池化局部辅助 SSM 模型的遥感图像语义分割技术","authors":"Yin Hu, Xianping Ma, Jialu Sui, Man-On Pun","doi":"arxiv-2409.06309","DOIUrl":null,"url":null,"abstract":"Semantic segmentation is a vital task in the field of remote sensing (RS).\nHowever, conventional convolutional neural network (CNN) and transformer-based\nmodels face limitations in capturing long-range dependencies or are often\ncomputationally intensive. Recently, an advanced state space model (SSM),\nnamely Mamba, was introduced, offering linear computational complexity while\neffectively establishing long-distance dependencies. Despite their advantages,\nMamba-based methods encounter challenges in preserving local semantic\ninformation. To cope with these challenges, this paper proposes a novel network\ncalled Pyramid Pooling Mamba (PPMamba), which integrates CNN and Mamba for RS\nsemantic segmentation tasks. The core structure of PPMamba, the Pyramid\nPooling-State Space Model (PP-SSM) block, combines a local auxiliary mechanism\nwith an omnidirectional state space model (OSS) that selectively scans feature\nmaps from eight directions, capturing comprehensive feature information.\nAdditionally, the auxiliary mechanism includes pyramid-shaped convolutional\nbranches designed to extract features at multiple scales. Extensive experiments\non two widely-used datasets, ISPRS Vaihingen and LoveDA Urban, demonstrate that\nPPMamba achieves competitive performance compared to state-of-the-art models.","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"PPMamba: A Pyramid Pooling Local Auxiliary SSM-Based Model for Remote Sensing Image Semantic Segmentation\",\"authors\":\"Yin Hu, Xianping Ma, Jialu Sui, Man-On Pun\",\"doi\":\"arxiv-2409.06309\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Semantic segmentation is a vital task in the field of remote sensing (RS).\\nHowever, conventional convolutional neural network (CNN) and transformer-based\\nmodels face limitations in capturing long-range dependencies or are often\\ncomputationally intensive. Recently, an advanced state space model (SSM),\\nnamely Mamba, was introduced, offering linear computational complexity while\\neffectively establishing long-distance dependencies. Despite their advantages,\\nMamba-based methods encounter challenges in preserving local semantic\\ninformation. To cope with these challenges, this paper proposes a novel network\\ncalled Pyramid Pooling Mamba (PPMamba), which integrates CNN and Mamba for RS\\nsemantic segmentation tasks. The core structure of PPMamba, the Pyramid\\nPooling-State Space Model (PP-SSM) block, combines a local auxiliary mechanism\\nwith an omnidirectional state space model (OSS) that selectively scans feature\\nmaps from eight directions, capturing comprehensive feature information.\\nAdditionally, the auxiliary mechanism includes pyramid-shaped convolutional\\nbranches designed to extract features at multiple scales. 
Extensive experiments\\non two widely-used datasets, ISPRS Vaihingen and LoveDA Urban, demonstrate that\\nPPMamba achieves competitive performance compared to state-of-the-art models.\",\"PeriodicalId\":501289,\"journal\":{\"name\":\"arXiv - EE - Image and Video Processing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Image and Video Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.06309\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Image and Video Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06309","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
PPMamba: A Pyramid Pooling Local Auxiliary SSM-Based Model for Remote Sensing Image Semantic Segmentation
Semantic segmentation is a vital task in the field of remote sensing (RS).
However, conventional models based on convolutional neural networks (CNNs) and
Transformers either struggle to capture long-range dependencies or are
computationally intensive. Recently, an advanced state space model (SSM),
namely Mamba, was introduced, offering linear computational complexity while
effectively modeling long-range dependencies. Despite these advantages,
Mamba-based methods encounter challenges in preserving local semantic
information. To address these challenges, this paper proposes a novel network
called Pyramid Pooling Mamba (PPMamba), which integrates CNN and Mamba for RS
semantic segmentation tasks. The core structure of PPMamba, the Pyramid
Pooling-State Space Model (PP-SSM) block, combines a local auxiliary mechanism
with an omnidirectional state space model (OSS) that selectively scans feature
maps from eight directions, capturing comprehensive feature information.
Additionally, the auxiliary mechanism includes pyramid-shaped convolutional
branches designed to extract features at multiple scales. Extensive experiments
on two widely used datasets, ISPRS Vaihingen and LoveDA Urban, demonstrate that
PPMamba achieves competitive performance compared to state-of-the-art models.
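To make the design described in the abstract more concrete, below is a minimal, illustrative PyTorch-style sketch of the two ingredients named for the PP-SSM block: pyramid-shaped convolutional branches that pool the feature map at several scales (the local auxiliary mechanism) and an eight-direction scan that flattens a feature map into 1-D sequences before an SSM would process them. The names PyramidLocalBranch, eight_direction_scan, and pool_sizes are illustrative assumptions, not the authors' implementation; the exact traversal orders used in the paper may differ, and the selective SSM (Mamba) layer itself is omitted.

```python
# Illustrative sketch only -- not the authors' code. It shows (1) a pyramid
# pooling local-auxiliary branch and (2) an eight-direction scan of a feature
# map; the selective state space (Mamba) layer that would consume the scanned
# sequences is left out.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PyramidLocalBranch(nn.Module):
    """Hypothetical local auxiliary branch: pool the feature map at several
    scales, convolve each pooled map, upsample back, and fuse."""

    def __init__(self, channels: int, pool_sizes=(1, 2, 4, 8)):
        super().__init__()
        self.pools = nn.ModuleList([nn.AdaptiveAvgPool2d(s) for s in pool_sizes])
        self.convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=3, padding=1) for _ in pool_sizes]
        )
        self.fuse = nn.Conv2d(channels * (len(pool_sizes) + 1), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        branches = [x]
        for pool, conv in zip(self.pools, self.convs):
            y = conv(pool(x))  # local context at one pooling scale
            branches.append(
                F.interpolate(y, size=(h, w), mode="bilinear", align_corners=False)
            )
        return self.fuse(torch.cat(branches, dim=1))


def eight_direction_scan(x: torch.Tensor) -> torch.Tensor:
    """Flatten a (B, C, H, W) map into eight 1-D sequences: row-major and
    column-major traversals of the map and of its horizontally flipped copy,
    each read forward and backward."""
    row = x.flatten(2)                     # (B, C, H*W), row-major
    col = x.transpose(2, 3).flatten(2)     # column-major
    xf = x.flip(-1)                        # horizontally flipped map
    rowf = xf.flatten(2)
    colf = xf.transpose(2, 3).flatten(2)
    seqs = [row, row.flip(-1), col, col.flip(-1),
            rowf, rowf.flip(-1), colf, colf.flip(-1)]
    return torch.stack(seqs, dim=1)        # (B, 8, C, H*W)


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)
    local = PyramidLocalBranch(64)(feat)   # (2, 64, 32, 32)
    scans = eight_direction_scan(feat)     # (2, 8, 64, 1024)
    print(local.shape, scans.shape)
```

In a full model, each of the eight scanned sequences would be fed through an SSM layer and the outputs merged back into a 2-D feature map, while the pyramid branch output would be fused with that result to restore local semantic detail.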