Coupled Generative Adversarial Network for Continuous Fine-Grained Action Segmentation

Harshala Gammulle, Tharindu Fernando, S. Denman, S. Sridharan, C. Fookes
{"title":"Coupled Generative Adversarial Network for Continuous Fine-Grained Action Segmentation","authors":"Harshala Gammulle, Tharindu Fernando, S. Denman, S. Sridharan, C. Fookes","doi":"10.1109/WACV.2019.00027","DOIUrl":null,"url":null,"abstract":"We propose a novel conditional GAN (cGAN) model for continuous fine-grained human action segmentation, that utilises multi-modal data and learned scene context information. The proposed approach utilises two GANs: termed Action GAN and Auxiliary GAN, where the Action GAN is trained to operate over the current RGB frame while the Auxiliary GAN utilises supplementary information such as depth or optical flow. The goal of both GANs is to generate similar 'action codes', a vector representation of the current action. To facilitate this process a context extractor that incorporates data and recent outputs from both modes is used to extract context information to aids recognition performance. The result is a recurrent GAN architecture which learns a task specific loss function from multiple feature modalities. Extensive evaluations on variants of the proposed model to show the importance of utilising different streams of information such as context and auxiliary information in the proposed network; and show that our model is capable of outperforming state-of-the-art methods for three widely used datasets: 50 Salads, MERL Shopping and Georgia Tech Egocentric Activities, comprising both static and dynamic camera settings.","PeriodicalId":436637,"journal":{"name":"2019 IEEE Winter Conference on Applications of Computer Vision (WACV)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"18","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Winter Conference on Applications of Computer Vision (WACV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WACV.2019.00027","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 18

Abstract

We propose a novel conditional GAN (cGAN) model for continuous fine-grained human action segmentation that utilises multi-modal data and learned scene context information. The proposed approach utilises two GANs, termed the Action GAN and the Auxiliary GAN, where the Action GAN is trained to operate over the current RGB frame while the Auxiliary GAN utilises supplementary information such as depth or optical flow. The goal of both GANs is to generate similar 'action codes', a vector representation of the current action. To facilitate this process, a context extractor that incorporates data and recent outputs from both modes is used to extract context information that aids recognition performance. The result is a recurrent GAN architecture which learns a task-specific loss function from multiple feature modalities. Extensive evaluations of variants of the proposed model show the importance of utilising different streams of information, such as context and auxiliary information, in the proposed network, and show that our model outperforms state-of-the-art methods on three widely used datasets: 50 Salads, MERL Shopping and Georgia Tech Egocentric Activities, which together comprise both static and dynamic camera settings.
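To make the described architecture concrete, below is a minimal PyTorch sketch of the coupled-generator idea: two conditional generators map different modalities to a shared action-code space, and a recurrent context extractor fuses their recent outputs. All names (`FrameEncoder`, `ActionCodeGenerator`, `ContextExtractor`), layer sizes, the action-code dimensionality, and the GRU-based context fusion are assumptions for illustration only; the abstract does not specify the actual layer configuration, and the discriminators and adversarial training loop are omitted.

```python
# Sketch of the coupled-GAN idea from the abstract. Everything below is an
# assumed, simplified form, not the paper's actual architecture.
import torch
import torch.nn as nn

CODE_DIM = 128  # assumed dimensionality of the action-code vector


class FrameEncoder(nn.Module):
    """Maps one input frame (RGB, depth, or optical flow) to a feature vector."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 64)
        )

    def forward(self, x):
        return self.net(x)


class ActionCodeGenerator(nn.Module):
    """One of the two coupled generators (Action GAN or Auxiliary GAN).
    Conditions on frame features plus shared context to emit an action code."""
    def __init__(self, in_channels, context_dim):
        super().__init__()
        self.encoder = FrameEncoder(in_channels)
        self.head = nn.Sequential(
            nn.Linear(64 + context_dim, 256), nn.ReLU(),
            nn.Linear(256, CODE_DIM),
        )

    def forward(self, frame, context):
        feat = self.encoder(frame)
        return self.head(torch.cat([feat, context], dim=1))


class ContextExtractor(nn.Module):
    """Recurrent module fusing recent action codes from both streams; an
    assumed form of the 'context extractor' the abstract mentions."""
    def __init__(self, context_dim=64):
        super().__init__()
        self.rnn = nn.GRUCell(2 * CODE_DIM, context_dim)

    def forward(self, rgb_code, aux_code, h):
        return self.rnn(torch.cat([rgb_code, aux_code], dim=1), h)


# Per-frame usage: each generator emits an action code, and the context
# state is carried forward, giving the recurrent GAN structure.
rgb_gen = ActionCodeGenerator(in_channels=3, context_dim=64)  # Action GAN
aux_gen = ActionCodeGenerator(in_channels=2, context_dim=64)  # Auxiliary GAN (flow)
ctx = ContextExtractor(context_dim=64)

h = torch.zeros(1, 64)
rgb, flow = torch.randn(1, 3, 64, 64), torch.randn(1, 2, 64, 64)
c_rgb = rgb_gen(rgb, h)
c_aux = aux_gen(flow, h)
h = ctx(c_rgb, c_aux, h)  # context for the next frame
```

In training, the coupling would be encouraged by driving both generators' codes towards the same ground-truth action code under the cGAN objective; this is a sketch of the idea, not the paper's exact loss formulation.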