Co-occurrence spatial–temporal model for adaptive background initialization in high-dynamic complex scenes

IF 3.4 · CAS Zone 3 (Engineering & Technology) · JCR Q2 (Engineering, Electrical & Electronic) · Signal Processing: Image Communication · Pub Date: 2023-09-20 · DOI: 10.1016/j.image.2023.117056
Wenjun Zhou, Yuheng Deng, Bo Peng, Sheng Xiang, Shun’ichi Kaneko
{"title":"用于高动态复杂场景中自适应背景初始化的共现时空模型","authors":"Wenjun Zhou ,&nbsp;Yuheng Deng ,&nbsp;Bo Peng ,&nbsp;Sheng Xiang ,&nbsp;Shun’ichi Kaneko","doi":"10.1016/j.image.2023.117056","DOIUrl":null,"url":null,"abstract":"<div><p><span>Background information is an important aspect of pre-processing for advanced applications in computer vision<span>. The literature has made rapid progress in background initialization. However, background initialization still suffers from high-dynamic complex scenes, such as illumination change, background motion, or camera jitter. Therefore, this study presents a novel Co-occurrence Spatial–Temporal (CoST) model for background initialization in high-dynamic complex scenes. CoST achieves a spatial–temporal model through a co-occurrence pixel-block structure. The proposed approach extracts the spatial–temporal information of pixels to self-adaptively generate the background without the influence of high-dynamic complex scenes. The efficiency of CoST is verified through experimental results compared with state-of-the-art algorithms. The source code of CoST is available online at: </span></span><span>https://github.com/HelloMrDeng/CoST.git</span><svg><path></path></svg>.</p></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"119 ","pages":"Article 117056"},"PeriodicalIF":3.4000,"publicationDate":"2023-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Co-occurrence spatial–temporal model for adaptive background initialization in high-dynamic complex scenes\",\"authors\":\"Wenjun Zhou ,&nbsp;Yuheng Deng ,&nbsp;Bo Peng ,&nbsp;Sheng Xiang ,&nbsp;Shun’ichi Kaneko\",\"doi\":\"10.1016/j.image.2023.117056\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p><span>Background information is an important aspect of pre-processing for advanced applications in computer vision<span>. The literature has made rapid progress in background initialization. However, background initialization still suffers from high-dynamic complex scenes, such as illumination change, background motion, or camera jitter. Therefore, this study presents a novel Co-occurrence Spatial–Temporal (CoST) model for background initialization in high-dynamic complex scenes. CoST achieves a spatial–temporal model through a co-occurrence pixel-block structure. The proposed approach extracts the spatial–temporal information of pixels to self-adaptively generate the background without the influence of high-dynamic complex scenes. The efficiency of CoST is verified through experimental results compared with state-of-the-art algorithms. 
The source code of CoST is available online at: </span></span><span>https://github.com/HelloMrDeng/CoST.git</span><svg><path></path></svg>.</p></div>\",\"PeriodicalId\":49521,\"journal\":{\"name\":\"Signal Processing-Image Communication\",\"volume\":\"119 \",\"pages\":\"Article 117056\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2023-09-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Signal Processing-Image Communication\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0923596523001388\",\"RegionNum\":3,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Signal Processing-Image Communication","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0923596523001388","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 1

Abstract

Background information is an important aspect of pre-processing for advanced applications in computer vision. The literature has made rapid progress in background initialization. However, background initialization still suffers from high-dynamic complex scenes, such as illumination change, background motion, or camera jitter. Therefore, this study presents a novel Co-occurrence Spatial–Temporal (CoST) model for background initialization in high-dynamic complex scenes. CoST achieves a spatial–temporal model through a co-occurrence pixel-block structure. The proposed approach extracts the spatial–temporal information of pixels to self-adaptively generate the background without the influence of high-dynamic complex scenes. The efficiency of CoST is verified through experimental results compared with state-of-the-art algorithms. The source code of CoST is available online at: https://github.com/HelloMrDeng/CoST.git.
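
The abstract above gives only the high-level idea. As a rough illustration of the kind of spatial–temporal pooling that background initialization builds on, the sketch below estimates a background image as the per-pixel temporal median of a frame stack. This is explicitly not the CoST algorithm (its co-occurrence pixel-block structure is defined in the paper and the linked repository); the function name and the synthetic test sequence are invented for this example.

```python
# Illustrative only: a naive temporal-median background estimate.
# This is NOT the paper's CoST method; it merely shows the kind of
# spatial-temporal pooling that background-initialization methods build on.
import numpy as np


def naive_background(frames: np.ndarray) -> np.ndarray:
    """Estimate a background image from a stack of frames.

    frames: array of shape (T, H, W) or (T, H, W, C).
    Returns the per-pixel temporal median, which suppresses moving
    foreground as long as each pixel shows the background in more
    than half of the frames.
    """
    return np.median(frames, axis=0).astype(frames.dtype)


if __name__ == "__main__":
    # Hypothetical usage with a synthetic sequence: a flat background
    # crossed by a small bright object.
    T, H, W = 30, 64, 64
    background = np.full((H, W), 120, dtype=np.uint8)
    frames = np.repeat(background[None], T, axis=0)
    for t in range(T):
        frames[t, 20:30, t:t + 10] = 255  # moving foreground block
    estimate = naive_background(frames)
    print("max abs error vs. true background:",
          int(np.abs(estimate.astype(int) - background.astype(int)).max()))
```

A temporal median survives any foreground object that covers a given pixel in fewer than half of the frames, which is why this simple baseline is a common reference point for background-initialization methods; the challenge the paper addresses is remaining robust when that assumption breaks down, e.g. under illumination change, background motion, or camera jitter.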

Source journal
Signal Processing: Image Communication (Engineering, Electrical & Electronic)
CiteScore: 8.40
Self-citation rate: 2.90%
Articles published per year: 138
Review time: 5.2 months
Journal description
Signal Processing: Image Communication is an international journal for the development of the theory and practice of image communication. Its primary objectives are the following:
To present a forum for the advancement of theory and practice of image communication.
To stimulate cross-fertilization between areas similar in nature which have traditionally been separated, for example, various aspects of visual communications and information systems.
To contribute to a rapid information exchange between the industrial and academic environments.
The editorial policy and the technical content of the journal are the responsibility of the Editor-in-Chief, the Area Editors and the Advisory Editors. The Journal is self-supporting from subscription income and contains a minimum amount of advertisements. Advertisements are subject to the prior approval of the Editor-in-Chief. The journal welcomes contributions from every country in the world.
Signal Processing: Image Communication publishes articles relating to aspects of the design, implementation and use of image communication systems. The journal features original research work, tutorial and review articles, and accounts of practical developments. Subjects of interest include image/video coding, 3D video representations and compression, 3D graphics and animation compression, HDTV and 3DTV systems, video adaptation, video over IP, peer-to-peer video networking, interactive visual communication, multi-user video conferencing, wireless video broadcasting and communication, visual surveillance, 2D and 3D image/video quality measures, pre/post processing, video restoration and super-resolution, multi-camera video analysis, motion analysis, content-based image/video indexing and retrieval, face and gesture processing, video synthesis, 2D and 3D image/video acquisition and display technologies, and architectures for image/video processing and communication.
Latest articles in this journal
SES-ReNet: Lightweight deep learning model for human detection in hazy weather conditions
HOI-V: One-stage human-object interaction detection based on multi-feature fusion in videos
Text in the dark: Extremely low-light text image enhancement
High efficiency deep image compression via channel-wise scale adaptive latent representation learning
Double supervision for scene text detection and recognition based on BMINet