MPNet: Multiscale predictions based on feature pyramid network for semantic segmentation

Q. V. Toan, Min Young Kim
2023 Fourteenth International Conference on Ubiquitous and Future Networks (ICUFN), published 2023-07-04
DOI: 10.1109/ICUFN57995.2023.10199608
Semantic segmentation is a complex task in which each pixel of an image is assigned a corresponding class, and it demands accuracy at object boundaries. The method plays a vital role in scene-understanding scenarios. For self-driving applications, the input contains objects of widely varying sizes, such as trucks, people, or traffic signs, while a single receptive field is effective only over a narrow range of scales. A feature pyramid network (FPN) uses different fields of view to extract information from the input: it obtains spatial information from the high-resolution feature maps and semantic information from the lower scales. The final feature representation captures both coarse and fine details, but this comes with drawbacks: it burdens the system with extensive computation and dilutes the semantic information. In this paper, we devise an effective multiscale predictions network (MPNet) to address these issues. A multiscale pyramid of predictions processes the prominent characteristics of each feature. Each pair of adjacent features is combined to produce a separate prediction; the lower-scale feature of each pair serves as the contextual contributor, and the other provides finer spatial information. The contextual branch is passed through atrous spatial pyramid pooling (ASPP) to improve performance. The segmentation scores from all predictions are fused to exploit their complementary advantages. The model is validated in a series of experiments on open data sets, achieving 76.5% mIoU at 50 FPS on Cityscapes and 43.9% mIoU on Mapillary Vistas.
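The fusion step described above (combining segmentation scores produced at several scales) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names are ours, nearest-neighbour upsampling stands in for whatever interpolation MPNet actually uses (bilinear is common), and simple averaging stands in for the paper's fusion rule.

```python
import numpy as np

def upsample_nn(scores: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour upsampling of a (C, H, W) score map by an integer factor."""
    return scores.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse_predictions(score_maps: list) -> np.ndarray:
    """Upsample each per-scale score map to the finest resolution and average
    them, so every scale's prediction contributes to the final segmentation."""
    target_h = max(s.shape[1] for s in score_maps)
    resized = [upsample_nn(s, target_h // s.shape[1]) for s in score_maps]
    return np.mean(resized, axis=0)

# Two toy predictions over 3 classes: a coarse 4x4 map and a fine 8x8 map.
rng = np.random.default_rng(0)
coarse = rng.random((3, 4, 4))
fine = rng.random((3, 8, 8))

fused = fuse_predictions([coarse, fine])   # shape (3, 8, 8)
labels = fused.argmax(axis=0)              # per-pixel class map, shape (8, 8)
```

The key property the sketch preserves is that the coarse (contextual) prediction influences every pixel of the final map after upsampling, while the fine prediction retains boundary detail at full resolution.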