Fabien Racapé, D. Doshkov, Martin Köppel, P. Ndjiki-Nya
Title: 2D+t autoregressive framework for video texture completion
DOI: 10.1109/ICIP.2014.7025944
Published in: 2014 IEEE International Conference on Image Processing (ICIP), pp. 4657-4661
Publication date: 2014-10-01
Citations: 4
Abstract
In this paper, an improved 2D+t texture completion framework is proposed, providing high visual quality of completed dynamic textures. A Spatiotemporal Autoregressive (STAR) model is used to propagate the signal from several available frames onto frames containing missing textures. Classically, Gaussian white noise drives the model to enable texture innovation. To improve on this, an innovation process is proposed that instead uses texture information from available training frames. The resulting method is deterministic, which solves a key problem for applications such as synthesis-based video coding. Compression simulations show potential bitrate savings of up to 49% on texture sequences at comparable visual quality. Video results are provided online to allow assessment of the visual quality of the completed textures.
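The core idea of STAR-based completion, i.e. predicting each missing pixel as a linear combination of causal spatial neighbors in the current frame and a patch in the previous frame, can be sketched as follows. The support shape, the least-squares coefficient fit, and the toy data are assumptions made for illustration; they are not the paper's exact model. Note that the purely deterministic synthesis (no noise term) mirrors the paper's replacement of the random innovation, but the actual innovation process described in the paper is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training" frames standing in for a dynamic texture (assumed data).
frames = rng.standard_normal((5, 32, 32))

def support(vol, t, y, x):
    """Causal spatiotemporal support: left/up neighbors in frame t
    plus a 3x3 patch centered at (y, x) in frame t-1 (assumed shape)."""
    spatial = [vol[t, y, x - 1], vol[t, y - 1, x]]
    temporal = vol[t - 1, y - 1:y + 2, x - 1:x + 2].ravel().tolist()
    return np.array(spatial + temporal)

# Fit the AR coefficients by least squares over interior pixels
# of the available training frames.
rows, targets = [], []
for t in range(1, 4):
    for y in range(1, 31):
        for x in range(1, 31):
            rows.append(support(frames, t, y, x))
            targets.append(frames[t, y, x])
coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)

# Complete the interior of the last frame in raster order; causal spatial
# neighbors are already synthesized when each pixel is visited.
synth = frames.copy()
for y in range(1, 31):
    for x in range(1, 31):
        synth[4, y, x] = coeffs @ support(synth, 4, y, x)
```

Because no noise term is added, rerunning the synthesis yields an identical result, which is the determinism the abstract highlights as important for synthesis-based coding.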