Recognition of human-human interaction using CWDTW

T. Subetha, S. Chitrakala
DOI: 10.1109/ICCPCT.2016.7530365
Published in: 2016 International Conference on Circuit, Power and Computing Technologies (ICCPCT)
Publication date: 2016-03-18
Citations: 3

Abstract

Understanding human activities is a challenging task in computer vision. Identifying human activities from videos and predicting their activity class labels is the key functionality of a Human Activity Recognition system. In general, the major issues for such a system are recognizing activities with or without concurrent movement of body parts, occlusion, incremental learning, etc. Among these, the chief difficulty lies in detecting the activity of humans performing interactions with or without concurrent movement of their body parts. This paper aims to resolve that problem. First, frames are extracted from the videos using conventional frame-extraction techniques. A pixel-based Local Binary Similarity Pattern (LBSP) background-subtraction algorithm detects the foreground in the extracted frames. Features are then extracted from the detected foreground using Histograms of Oriented Gradients (HOG) and a pyramidal feature-extraction technique. A 20-point Microsoft human kinematic model is constructed from the set of features present in each frame, and supervised temporal stochastic neighbor embedding is applied to transform the high-dimensional data into a low-dimensional representation. K-means clustering is then applied to produce a bag of key poses. Finally, a Constrained Weighted Dynamic Time Warping (CWDTW) classifier generates the activity class label. Experimental results show higher recognition rates for various interactions on benchmark datasets such as the Kinect Interaction dataset and a Gaming dataset.
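The "bag of key poses" step described above can be sketched with a plain k-means over the low-dimensional pose descriptors. This is only an illustrative sketch, not the paper's implementation: the function name, the NumPy-only clustering loop, and the choice of `k` are all assumptions introduced here.

```python
import numpy as np

def kmeans_key_poses(X, k=8, iters=50, seed=0):
    """Plain k-means (NumPy only) to build a 'bag of key poses'.

    Hypothetical sketch: X is an (N, D) array of low-dimensional pose
    descriptors pooled over all training frames. Returns (centers, labels);
    the cluster centers act as the key-pose vocabulary.
    """
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct descriptors.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster emptied.
        for c in range(k):
            pts = X[labels == c]
            if len(pts):
                centers[c] = pts.mean(axis=0)
    # Final assignment so labels match the returned centers.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    return centers, labels
```

Each frame of a video can then be replaced by its nearest key-pose index, turning the video into a discrete pose sequence suitable for time-warping comparison.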
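The CWDTW classifier itself is not specified in this abstract. As a hedged illustration of the distance family it belongs to, the sketch below implements dynamic time warping with a Sakoe-Chiba band constraint and a weight that penalizes off-diagonal (warping) steps; the function name, the `window` and `weight` parameters, and the frame-wise Euclidean cost are all assumptions, not the paper's exact formulation.

```python
import numpy as np

def cwdtw_distance(seq_a, seq_b, window=10, weight=1.0):
    """Constrained, weighted DTW between two pose-feature sequences.

    seq_a, seq_b: (T, D) arrays of per-frame pose feature vectors.
    window: Sakoe-Chiba band half-width (the path constraint).
    weight: multiplier applied to non-diagonal (warping) steps.
    """
    n, m = len(seq_a), len(seq_b)
    window = max(window, abs(n - m))  # band must cover the length difference
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - window), min(m, i + window) + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = min(
                D[i - 1, j - 1] + cost,        # diagonal match
                D[i - 1, j] + weight * cost,   # warp: skip a frame of seq_b
                D[i, j - 1] + weight * cost,   # warp: skip a frame of seq_a
            )
    return D[n, m]
```

A nearest-neighbor classifier over labeled training sequences would then assign the activity label of the training sequence with the smallest CWDTW distance to the query.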