Spatio-Temporal Convolutional Neural Network for Frame Rate Up-Conversion

Yusuke Tanaka, T. Omori
DOI: 10.1145/3325773.3325777
Published in: Proceedings of the 2019 3rd International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence
Publication date: 2019-03-23
Citations: 1

Abstract

The visual quality of video improves with higher resolution and higher frame rate. To achieve a higher frame rate, we propose a new frame rate up-conversion method using a spatio-temporal convolutional neural network. In recent years, advances in machine learning techniques such as convolutional neural networks have enabled sharper interpolated-frame estimation. However, conventional convolutional neural network methods struggle to estimate accurate interpolated frames for video containing complex motion. To address this problem, we adopt spatio-temporal convolution rather than conventional spatial convolution. Spatio-temporal convolution is expected to be effective for nonlinear motion because it can capture the temporal change in an object's motion. We verified the effectiveness of the proposed method using video data containing complex motions such as rotation and scaling.
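The key difference between spatial and spatio-temporal convolution is that the latter slides a kernel across stacked frames, so its response depends on how pixels move over time, not only on spatial structure within one frame. The toy numpy sketch below illustrates this idea; it is not the authors' network, and the `conv3d` function, the diagonal-motion example, and the kernel are all illustrative assumptions.

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid-mode 3D cross-correlation over a (T, H, W) frame stack.

    A spatio-temporal kernel spans several frames at once, so it can
    respond to motion patterns (how pixels shift between frames), which
    a purely spatial 2D kernel cannot see.
    """
    kt, kh, kw = kernel.shape
    T, H, W = volume.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for t in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[t, i, j] = np.sum(
                    volume[t:t + kt, i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical example: a bright dot moving diagonally across 3 frames.
frames = np.zeros((3, 5, 5))
for t in range(3):
    frames[t, t + 1, t + 1] = 1.0  # dot at (1,1), (2,2), (3,3)

# A 3x3x3 kernel tuned to exactly this diagonal motion pattern.
kernel = np.zeros((3, 3, 3))
for t in range(3):
    kernel[t, t, t] = 1.0

response = conv3d(frames, kernel)
# The response peaks only where the motion trajectory aligns with the
# kernel, i.e. the kernel acts as a detector for that specific motion.
```

In a learned network, many such kernels would be trained end-to-end so that the interpolated frame can be synthesized from the detected motion, rather than being hand-designed as in this sketch.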