Interaction semantic segmentation network via progressive supervised learning

Journal: Machine Vision and Applications (JCR Q3, Computer Science, Artificial Intelligence; Impact Factor 2.4)
DOI: 10.1007/s00138-023-01500-4
Publication date: 2024-02-05
Publication type: Journal Article
Citations: 0
Abstract
Semantic segmentation requires both low-level details and high-level semantics, while preserving detail and maintaining inference speed. Most existing segmentation approaches leverage low- and high-level features from pre-trained models. We propose an interaction semantic segmentation network via Progressive Supervised Learning (ISSNet). Unlike a simple fusion of two sets of features, we introduce an information interaction module that embeds semantics into image details; the two jointly guide the feature responses in an interactive way. We develop a simple yet effective boundary refinement module that provides refined boundary features for matching the corresponding semantics. We introduce a progressive supervised learning strategy applied at the training level, rather than the architecture level, which significantly improves network performance. Our proposed ISSNet achieves optimal inference time. We perform extensive experiments on four datasets: Cityscapes, HazeCityscapes, RainCityscapes, and CamVid. In addition to performing better in fine weather, the proposed ISSNet also performs well on rainy and foggy days. We also conduct ablation studies to demonstrate the role of each proposed component. Code is available at: https://github.com/Ruini94/ISSNet
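The abstract does not specify the internals of the information interaction module; the actual design is in the paper and the linked repository. Purely as an illustration of the idea described above (two feature sets jointly guiding each other's responses rather than being simply fused), here is a minimal NumPy sketch of one plausible mutual-gating interaction. The function name `interact` and the gating scheme are assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def interact(detail, semantic):
    """Hypothetical information-interaction fusion.

    detail, semantic: (C, H, W) feature maps at the same resolution.
    Each branch gates the other's response, then the gated maps are
    summed, instead of simply concatenating or adding the raw features.
    """
    detail_out = detail * sigmoid(semantic)    # semantics guide detail responses
    semantic_out = semantic * sigmoid(detail)  # details guide semantic responses
    return detail_out + semantic_out

# Toy usage with random feature maps
detail = np.random.rand(4, 8, 8)
semantic = np.random.rand(4, 8, 8)
fused = interact(detail, semantic)
print(fused.shape)  # (4, 8, 8)
```

In a real network the gates would be learned (e.g. by small convolutions before the sigmoid) and the semantic branch upsampled to the detail resolution first; this sketch only shows the mutual-guidance structure.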
Journal description:
Machine Vision and Applications publishes high-quality technical contributions in machine vision research and development. Specifically, the editors encourage submissions in all applications and engineering aspects of image-related computing. In particular, original contributions dealing with scientific, commercial, industrial, military, and biomedical applications of machine vision are all within the scope of the journal.
Particular emphasis is placed on engineering and technology aspects of image processing and computer vision.
The following aspects of machine vision applications are of interest: algorithms, architectures, VLSI implementations, AI techniques and expert systems for machine vision, front-end sensing, multidimensional and multisensor machine vision, real-time techniques, image databases, virtual reality and visualization. Papers must include a significant experimental validation component.