{"title":"Video object segmentation based on dynamic perception update and feature fusion","authors":"","doi":"10.1016/j.imavis.2024.105156","DOIUrl":null,"url":null,"abstract":"<div><p>The current popular video object segmentation algorithms based on memory network indiscriminately update the frame information to the memory pool, fails to make reasonable use of the historical frame information, causing frame information redundancy in the memory pool, resulting in the increase of the computation amount. At the same time, the mask refinement method is relatively rough, resulting in blurred edges of the generated mask. To solve these problems, This paper proposes a video object segmentation algorithm based on dynamic perception update and feature fusion. In order to reasonably utilize the historical frame information, a dynamic perception update module is proposed to selectively update the segmentation frame mask. Meanwhile, a mask refinement module is established to enhance the detail information of the shallow features of the backbone network. This module uses a double kernels fusion block to fuse the different scale information of the features, and finally uses the Laplacian operator to sharpen the edges of the mask. The experimental results show that on the public datasets DAVIS2016, DAVIS2017 and YouTube-VOS<sub>18</sub>, the comprehensive performance of the algorithm in this paper reaches 86.9%, 79.3% and 71.6%, respectively, and the segmentation speed reaches 15FPS on the DAVIS2016 dataset. 
Compared with many mainstream algorithms in recent years, it has obvious advantages in performance.</p></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":null,"pages":null},"PeriodicalIF":4.2000,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Image and Vision Computing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0262885624002610","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Popular video object segmentation algorithms based on memory networks update frame information to the memory pool indiscriminately, failing to make reasonable use of historical frame information; the resulting redundancy in the memory pool increases the amount of computation. At the same time, their mask refinement is relatively rough, producing masks with blurred edges. To solve these problems, this paper proposes a video object segmentation algorithm based on dynamic perception update and feature fusion. To make reasonable use of historical frame information, a dynamic perception update module is proposed that selectively updates the segmentation frame mask. Meanwhile, a mask refinement module is established to enhance the detail information in the shallow features of the backbone network: it uses a double-kernel fusion block to fuse feature information at different scales, and finally applies the Laplacian operator to sharpen the edges of the mask. Experimental results show that on the public datasets DAVIS2016, DAVIS2017 and YouTube-VOS18, the comprehensive performance of the proposed algorithm reaches 86.9%, 79.3% and 71.6%, respectively, and the segmentation speed reaches 15 FPS on the DAVIS2016 dataset. Compared with many mainstream algorithms of recent years, it shows clear performance advantages.
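The two ideas in the abstract can be caricatured in a short sketch. The paper's actual criteria are not given here, so both functions below are hypothetical stand-ins: `should_write_to_memory` gates memory-pool writes by feature novelty (one plausible form of a selective, "dynamic perception" update), and `laplacian_sharpen` applies a 4-neighbour Laplacian to sharpen the edges of a soft mask, which is the generic form of the Laplacian edge-sharpening step the abstract mentions. The names, the cosine-similarity criterion, and the `novelty_thresh` parameter are all assumptions for illustration.

```python
import numpy as np

def should_write_to_memory(frame_feat, memory_feats, novelty_thresh=0.1):
    """Hypothetical memory-write gate: store a frame's feature only if it is
    sufficiently dissimilar (by cosine similarity) to every stored feature.
    A stand-in for a selective update rule, not the paper's actual module."""
    if len(memory_feats) == 0:
        return True  # always keep the first frame
    f = frame_feat / (np.linalg.norm(frame_feat) + 1e-8)
    sims = [float(f @ (m / (np.linalg.norm(m) + 1e-8))) for m in memory_feats]
    # novel enough only if even the closest stored feature is dissimilar
    return max(sims) < 1.0 - novelty_thresh

def laplacian_sharpen(mask, strength=1.0):
    """Sharpen a soft mask in [0, 1] by subtracting its 4-neighbour Laplacian
    (unsharp-masking style), then clipping back to [0, 1]."""
    p = np.pad(mask, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    return np.clip(mask - strength * lap, 0.0, 1.0)
```

On a mask with a soft two-pixel edge ramp, `laplacian_sharpen` pushes values on either side of the boundary toward 0 and 1, hardening the edge; the gate skips frames whose features nearly duplicate what is already in the memory pool, which is the redundancy the abstract criticizes.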
Journal introduction:
Image and Vision Computing has as a primary aim the provision of an effective medium of interchange for the results of high quality theoretical and applied research fundamental to all aspects of image interpretation and computer vision. The journal publishes work that proposes new image interpretation and computer vision methodology or addresses the application of such methods to real world scenes. It seeks to strengthen a deeper understanding in the discipline by encouraging the quantitative comparison and performance evaluation of the proposed methodology. The coverage includes: image interpretation, scene modelling, object recognition and tracking, shape analysis, monitoring and surveillance, active vision and robotic systems, SLAM, biologically-inspired computer vision, motion analysis, stereo vision, document image understanding, character and handwritten text recognition, face and gesture recognition, biometrics, vision-based human-computer interaction, human activity and behavior understanding, data fusion from multiple sensor inputs, image databases.