Low-Complexity Video Classification Using Recurrent Neural Networks

Ifat Abramovich, Tomer Ben-Yehuda, R. Cohen
{"title":"基于递归神经网络的低复杂度视频分类","authors":"Ifat Abramovich, Tomer Ben-Yehuda, R. Cohen","doi":"10.1109/ICSEE.2018.8646076","DOIUrl":null,"url":null,"abstract":"Deep learning has led to great successes in computer vision tasks such as image classification. This is mostly attributed to the availability of large image datasets such as ImageNet. However, the progress in video classification has been slower, especially due to the small size of available video datasets and larger computational and memory demands. To promote innovation and advancement in this field, Google announced the YouTube-8M dataset in 2016, which is a public video dataset containing about 8-million tagged videos. In this paper, we train several deep neural networks for video classification on a subset of YouTube-8M. Our approach is based on extracting frame-level features using the Inception-v3 network, which are later used by recurrent neural networks with LSTM/BiLSTM units for video classification. We focus on network architectures with low computational requirements and present a detailed performance comparison. We show that for 5 categories, more than 96% of the videos are labeled correctly, where for 10 categories more than 89% of the videos are labeled correctly. We demonstrate that transfer learning leads to substantial saving in training time, while offering good results.","PeriodicalId":254455,"journal":{"name":"2018 IEEE International Conference on the Science of Electrical Engineering in Israel (ICSEE)","volume":"153 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Low-Complexity Video Classification using Recurrent Neural Networks\",\"authors\":\"Ifat Abramovich, Tomer Ben-Yehuda, R. 
Cohen\",\"doi\":\"10.1109/ICSEE.2018.8646076\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning has led to great successes in computer vision tasks such as image classification. This is mostly attributed to the availability of large image datasets such as ImageNet. However, the progress in video classification has been slower, especially due to the small size of available video datasets and larger computational and memory demands. To promote innovation and advancement in this field, Google announced the YouTube-8M dataset in 2016, which is a public video dataset containing about 8-million tagged videos. In this paper, we train several deep neural networks for video classification on a subset of YouTube-8M. Our approach is based on extracting frame-level features using the Inception-v3 network, which are later used by recurrent neural networks with LSTM/BiLSTM units for video classification. We focus on network architectures with low computational requirements and present a detailed performance comparison. We show that for 5 categories, more than 96% of the videos are labeled correctly, where for 10 categories more than 89% of the videos are labeled correctly. 
We demonstrate that transfer learning leads to substantial saving in training time, while offering good results.\",\"PeriodicalId\":254455,\"journal\":{\"name\":\"2018 IEEE International Conference on the Science of Electrical Engineering in Israel (ICSEE)\",\"volume\":\"153 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE International Conference on the Science of Electrical Engineering in Israel (ICSEE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICSEE.2018.8646076\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on the Science of Electrical Engineering in Israel (ICSEE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSEE.2018.8646076","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Deep learning has led to great successes in computer vision tasks such as image classification. This is mostly attributed to the availability of large image datasets such as ImageNet. However, the progress in video classification has been slower, especially due to the small size of available video datasets and larger computational and memory demands. To promote innovation and advancement in this field, Google announced the YouTube-8M dataset in 2016, which is a public video dataset containing about 8-million tagged videos. In this paper, we train several deep neural networks for video classification on a subset of YouTube-8M. Our approach is based on extracting frame-level features using the Inception-v3 network, which are later used by recurrent neural networks with LSTM/BiLSTM units for video classification. We focus on network architectures with low computational requirements and present a detailed performance comparison. We show that for 5 categories, more than 96% of the videos are labeled correctly, where for 10 categories more than 89% of the videos are labeled correctly. We demonstrate that transfer learning leads to substantial saving in training time, while offering good results.
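The pipeline described in the abstract — per-frame features extracted by Inception-v3, aggregated over time by an LSTM, then mapped to a video-level label — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hidden size, the single-layer LSTM, and the randomly initialised weights standing in for trained parameters are all assumptions; only the 2048-dimensional feature size (Inception-v3's pooled output) and the 5-category setting come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 2048   # size of Inception-v3 pooled features, assumed precomputed per frame
HIDDEN = 128      # LSTM hidden size (hypothetical choice)
N_CLASSES = 5     # the 5-category setting reported in the paper

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(frames, Wx, Wh, b):
    """Run a single-layer LSTM over a (T, FEAT_DIM) sequence of frame features
    and return the final hidden state. Gate order: input, forget, cell, output."""
    h = np.zeros(HIDDEN)
    c = np.zeros(HIDDEN)
    for x in frames:
        z = Wx @ x + Wh @ h + b            # all four gates in one (4*HIDDEN,) vector
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c = f * c + i * g                  # update cell state
        h = o * np.tanh(c)                 # update hidden state
    return h

def classify(frames, params):
    """Video-level classification: LSTM over frame features, then softmax."""
    Wx, Wh, b, Wo, bo = params
    h = lstm_forward(frames, Wx, Wh, b)
    logits = Wo @ h + bo
    e = np.exp(logits - logits.max())      # numerically stable softmax
    return e / e.sum()

# Randomly initialised parameters stand in for trained weights.
params = (rng.normal(0, 0.05, (4 * HIDDEN, FEAT_DIM)),
          rng.normal(0, 0.05, (4 * HIDDEN, HIDDEN)),
          np.zeros(4 * HIDDEN),
          rng.normal(0, 0.05, (N_CLASSES, HIDDEN)),
          np.zeros(N_CLASSES))

# 30 frames of (mock) precomputed Inception-v3 features for one video.
video = rng.normal(size=(30, FEAT_DIM))
probs = classify(video, params)
print(probs.shape)
```

A BiLSTM variant, as also evaluated in the paper, would run a second pass over the reversed frame sequence and concatenate the two final hidden states before the output layer; in practice the feature extraction is what makes the transfer-learning setup cheap, since only the recurrent head is trained.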