Human body language understanding for action detection using geometric features

Neha Shirbhate, K. Talele
{"title":"基于几何特征的人体肢体语言理解动作检测","authors":"Neha Shirbhate, K. Talele","doi":"10.1109/IC3I.2016.7918034","DOIUrl":null,"url":null,"abstract":"In human interaction, understanding human behaviors is a challenging problem in todays world. Action recognition has become a very important topic in detecting the emotional activity with many fundamental applications, such as in robotics, video surveillance, human-computer interaction. In this paper, we are proposing a system that uses semantic rules to define emotional activities. First, we apply morphological operation on pre-processing frame. Then by segmentation process, image is partitioned into multiple regions multiple regions which intended to extract the object. Once extract the object, action representation derives behavior of object in specific time. Using temporal and spatial properties of the objects, emotions are classified using semantics-based approach. Further the actions are classified as sitting posture and standing posture. Here, sitting posture concludes activity to be recognized as either relaxed or hands on forehead(tensed). While standing posture concludes activity recognized as loitering or fidgetting. We have opted for semantics-based approach instead of machine learning enables us to detect the actions without requiring to train the system. This also makes the system better performance-wise; and enables action detection in real time.","PeriodicalId":305971,"journal":{"name":"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Human body language understanding for action detection using geometric features\",\"authors\":\"Neha Shirbhate, K. Talele\",\"doi\":\"10.1109/IC3I.2016.7918034\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In human interaction, understanding human behaviors is a challenging problem in todays world. Action recognition has become a very important topic in detecting the emotional activity with many fundamental applications, such as in robotics, video surveillance, human-computer interaction. In this paper, we are proposing a system that uses semantic rules to define emotional activities. First, we apply morphological operation on pre-processing frame. Then by segmentation process, image is partitioned into multiple regions multiple regions which intended to extract the object. Once extract the object, action representation derives behavior of object in specific time. Using temporal and spatial properties of the objects, emotions are classified using semantics-based approach. Further the actions are classified as sitting posture and standing posture. Here, sitting posture concludes activity to be recognized as either relaxed or hands on forehead(tensed). While standing posture concludes activity recognized as loitering or fidgetting. We have opted for semantics-based approach instead of machine learning enables us to detect the actions without requiring to train the system. 
This also makes the system better performance-wise; and enables action detection in real time.\",\"PeriodicalId\":305971,\"journal\":{\"name\":\"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IC3I.2016.7918034\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IC3I.2016.7918034","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

Understanding human behavior during interaction is a challenging problem today. Action recognition has become an important topic in detecting emotional activity, with many fundamental applications such as robotics, video surveillance, and human-computer interaction. In this paper, we propose a system that uses semantic rules to define emotional activities. First, we apply morphological operations to the pre-processed frame. Then, through segmentation, the image is partitioned into multiple regions in order to extract the object. Once the object is extracted, the action representation derives the object's behavior over a specific time interval. Using the temporal and spatial properties of the objects, emotions are classified with a semantics-based approach. The actions are further classified into sitting and standing postures. A sitting posture is recognized as either relaxed or hands-on-forehead (tensed), while a standing posture is recognized as loitering or fidgeting. Opting for a semantics-based approach instead of machine learning lets us detect actions without having to train the system; it also improves performance and enables real-time action detection.
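As a rough sketch of how such a rule-based pipeline could be put together, the example below uses OpenCV background subtraction (MOG2) with morphological opening and closing to isolate the person, then applies simple geometric rules: bounding-box aspect ratio to separate sitting from standing, and centroid displacement over a temporal window to separate loitering from fidgeting. All function names, thresholds, and the choice of MOG2 are assumptions made for illustration; they are not taken from the paper.

```python
import cv2
import numpy as np

# Hypothetical components: MOG2 background subtraction and a 5x5 elliptical
# structuring element stand in for the paper's pre-processing and morphology.
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

def extract_person(frame):
    """Segment the largest foreground blob after morphological cleanup."""
    mask = bg_subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))  # (x, y, w, h)

def classify_posture(box):
    """Spatial rule: tall, narrow boxes -> standing; squatter boxes -> sitting."""
    x, y, w, h = box
    return "standing" if h / float(w) > 1.8 else "sitting"  # 1.8 is an assumed threshold

def classify_standing_activity(centroid_history, window=30):
    """Temporal rule: loitering vs. fidgeting from centroid motion over a window of frames."""
    if len(centroid_history) < window:
        return "unknown"
    pts = np.array(centroid_history[-window:], dtype=float)
    drift = np.linalg.norm(pts[-1] - pts[0])                          # net movement
    jitter = np.mean(np.linalg.norm(np.diff(pts, axis=0), axis=1))    # frame-to-frame movement
    if drift < 20 and jitter > 2:       # assumed pixel thresholds
        return "fidgeting"              # staying in place but moving constantly
    return "loitering"                  # lingering in the area with little net motion

# Per-frame usage (hypothetical video source):
# box = extract_person(frame)
# if box is not None:
#     x, y, w, h = box
#     history.append((x + w / 2.0, y + h / 2.0))
#     if classify_posture(box) == "standing":
#         print(classify_standing_activity(history))
```

Because every decision here is a threshold rule over geometric quantities, there is no training step, which is the trade-off the abstract emphasizes: simple and fast enough for real time, at the cost of hand-tuned rules. Distinguishing relaxed from hands-on-forehead (tensed) sitting would need additional cues, for example the position of hand blobs relative to the head region of the silhouette.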