Enhanced Action Tubelet Detector for Spatio-Temporal Video Action Detection

Yutang Wu, Hanli Wang, Shuheng Wang, Qinyu Li
{"title":"Enhanced Action Tubelet Detector for Spatio-Temporal Video Action Detection","authors":"Yutang Wu, Hanli Wang, Shuheng Wang, Qinyu Li","doi":"10.1109/ICASSP40776.2020.9054394","DOIUrl":null,"url":null,"abstract":"Current spatio-temporal action detection methods usually employ a two-stream architecture, a RGB stream for raw images and an auxiliary motion stream for optical flow. Training is required individually for each stream and more efforts are necessary to improve the precision of RGB stream. To this end, a single stream network named enhanced action tubelet (EAT) detector is proposed in this work based on RGB stream. A modulation layer is designed to modulate RGB features with conditional information from the visual clues of optical flow and human pose. This network is end-to-end and the proposed layer can be easily applied into other action detectors. Experiments show that EAT detector outperforms traditional RGB stream and is competitive to existing two-stream methods while free from the trouble of training streams separately. By being embedded in a new three-stream architecture, the resulting three-stream EAT detector achieves impressive performances among the best competitors on UCF-Sports, JHMDB and UCF-101.","PeriodicalId":13127,"journal":{"name":"ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"240 1","pages":"2388-2392"},"PeriodicalIF":0.0000,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICASSP40776.2020.9054394","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Current spatio-temporal action detection methods usually employ a two-stream architecture: an RGB stream for raw images and an auxiliary motion stream for optical flow. Each stream must be trained individually, and further effort is needed to improve the precision of the RGB stream. To this end, a single-stream network named the enhanced action tubelet (EAT) detector is proposed in this work, built on the RGB stream. A modulation layer is designed to modulate RGB features with conditional information derived from the visual cues of optical flow and human pose. The network is trained end-to-end, and the proposed layer can be easily applied to other action detectors. Experiments show that the EAT detector outperforms the traditional RGB stream and is competitive with existing two-stream methods, while avoiding the need to train the streams separately. When embedded in a new three-stream architecture, the resulting three-stream EAT detector achieves performance among the best competitors on UCF-Sports, JHMDB, and UCF-101.
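The abstract does not spell out the exact form of the modulation layer. The PyTorch sketch below illustrates one plausible reading, a FiLM-style channel-wise modulation in which per-channel scale and shift parameters are predicted from the conditional (flow/pose) features and applied to the RGB feature map; the class and parameter names are hypothetical and this is not the authors' implementation.

import torch
import torch.nn as nn

class ModulationLayer(nn.Module):
    """Minimal sketch of a conditional feature-modulation layer.

    Assumes FiLM-style channel-wise scale/shift of RGB features,
    conditioned on auxiliary cues; the paper's design may differ.
    """
    def __init__(self, rgb_channels: int, cond_channels: int):
        super().__init__()
        # Predict per-channel scale (gamma) and shift (beta) from the
        # conditional features (e.g. encoded optical-flow / pose cues).
        self.to_gamma = nn.Conv2d(cond_channels, rgb_channels, kernel_size=1)
        self.to_beta = nn.Conv2d(cond_channels, rgb_channels, kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor, cond_feat: torch.Tensor) -> torch.Tensor:
        # rgb_feat:  (N, C_rgb,  H, W) features from the RGB stream
        # cond_feat: (N, C_cond, H, W) conditional features (flow/pose cues)
        gamma = self.to_gamma(cond_feat)
        beta = self.to_beta(cond_feat)
        return (1 + gamma) * rgb_feat + beta  # modulated RGB features

# Usage sketch: modulate a 256-channel RGB feature map with 64-channel cues.
layer = ModulationLayer(rgb_channels=256, cond_channels=64)
rgb = torch.randn(2, 256, 28, 28)
cond = torch.randn(2, 64, 28, 28)
out = layer(rgb, cond)  # shape: (2, 256, 28, 28)

Because the layer only rescales and shifts existing RGB features, it can be dropped into other detectors without changing their backbone, which is consistent with the abstract's claim that the proposed layer transfers easily to other action detectors.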