Mengshun Hu, Liang Liao, Jing Xiao, Lin Gu, S. Satoh
Title: Motion Feedback Design for Video Frame Interpolation
Published in: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4347-4351, May 2020
DOI: 10.1109/ICASSP40776.2020.9053223
Citations: 6
Abstract
This paper introduces a feedback-based approach to interpolating video frames that contain small, fast-moving objects. Unlike existing feedforward methods, which estimate optical flow and synthesize in-between frames sequentially, we introduce a motion-oriented component that adds a feedback block to the existing multi-scale autoencoder pipeline, feeding information about small objects back between the architectures at two different scales. We show that this additional information enables more robust estimation of the optical flow caused by small objects in fast motion. Experiments on various datasets show that the feedback mechanism allows our method to achieve state-of-the-art results, both qualitatively and quantitatively.
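The abstract does not specify the form of the feedback block, so the following is only a rough illustrative sketch, not the paper's architecture. It assumes the feedback is a simple upsample-and-blend step: coarse-scale features (where a small, fast-moving object covers relatively more of the receptive field) are upsampled and mixed back into the fine-scale features before the next flow-estimation pass. The function names `upsample2x` and `feedback_fusion` and the blend weight `w` are hypothetical.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor 2x upsampling of a (C, H, W) feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def feedback_fusion(fine_feat, coarse_feat, w=0.5):
    # Hypothetical feedback step: coarse-scale features are upsampled
    # to the fine scale and blended in, so motion cues for small objects
    # picked up at the coarse scale can refine the fine-scale estimate.
    return fine_feat + w * upsample2x(coarse_feat)

# Toy feature maps: 4 channels, fine branch at 8x8, coarse branch at 4x4.
fine = np.zeros((4, 8, 8))
coarse = np.ones((4, 4, 4))
fused = feedback_fusion(fine, coarse)
print(fused.shape)  # (4, 8, 8)
```

In a real multi-scale pipeline this fusion would feed into the next iteration of the fine-scale encoder, which is what makes the connection a feedback loop rather than a one-shot skip connection.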