Improving surface normals based action recognition in depth images

X. Nguyen, T. Nguyen, F. Charpillet
{"title":"Improving surface normals based action recognition in depth images","authors":"X. Nguyen, T. Nguyen, F. Charpillet","doi":"10.1109/AVSS.2016.7738053","DOIUrl":null,"url":null,"abstract":"In this paper, we propose a new local descriptor for action recognition in depth images. Our proposed descriptor jointly encodes the shape and motion cues using surface normals in 4D space of depth, time, spatial coordinates and higher-order partial derivatives of depth values along spatial coordinates. In a traditional Bag-of-words (BoW) approach, local descriptors extracted from a depth sequence are encoded to form a global representation of the sequence. In our approach, local descriptors are encoded using Sparse Coding (SC) and Fisher Vector (FV), which have been recently proven effective for action recognition. Action recognition is then simply performed using a linear SVM classifier. Our proposed action descriptor is evaluated on two public benchmark datasets, MSRAction3D and MSRGesture3D. The experimental result shows the effectiveness of the proposed method on both the datasets.","PeriodicalId":438290,"journal":{"name":"2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 13th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AVSS.2016.7738053","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

In this paper, we propose a new local descriptor for action recognition in depth images. Our descriptor jointly encodes shape and motion cues using surface normals in the 4D space of depth, time, and spatial coordinates, together with higher-order partial derivatives of depth values along the spatial coordinates. In a traditional Bag-of-Words (BoW) approach, local descriptors extracted from a depth sequence are encoded to form a global representation of the sequence. In our approach, local descriptors are encoded using Sparse Coding (SC) and Fisher Vectors (FV), both of which have recently proven effective for action recognition. Action recognition is then performed with a linear SVM classifier. Our proposed action descriptor is evaluated on two public benchmark datasets, MSRAction3D and MSRGesture3D. The experimental results show the effectiveness of the proposed method on both datasets.
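The abstract does not spell out how the 4D surface normals are computed, but the standard construction (used by HON4D-style descriptors) treats the depth video as a surface d = f(x, y, t) and takes normals proportional to (-∂d/∂x, -∂d/∂y, -∂d/∂t, 1). A minimal sketch of that step is below; the function name, array layout, and toy data are our assumptions, not the authors' code.

```python
import numpy as np

def surface_normals_4d(depth_seq):
    """Estimate unit 4D surface normals for a depth video.

    depth_seq: array of shape (T, H, W) holding depth values d(x, y, t).
    Viewing the sequence as the surface d = f(x, y, t), a normal is
    proportional to (-dd/dx, -dd/dy, -dd/dt, 1).
    Returns an array of shape (T, H, W, 4) of unit-length normals.
    """
    # np.gradient returns derivatives along axes in order (t, y, x)
    d_t, d_y, d_x = np.gradient(depth_seq.astype(np.float64))
    ones = np.ones_like(depth_seq, dtype=np.float64)
    n = np.stack([-d_x, -d_y, -d_t, ones], axis=-1)
    # normalize to unit length (the constant 1 component keeps norms > 0)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n

# toy example: a planar ramp that shifts linearly over time
seq = np.fromfunction(lambda t, y, x: x + 2.0 * t, (4, 8, 8))
normals = surface_normals_4d(seq)
print(normals.shape)  # (4, 8, 8, 4)
```

In a full pipeline these per-pixel normals would be pooled into local descriptors, encoded with sparse coding or Fisher vectors, and classified with a linear SVM, as the abstract describes.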