A Multimodal Fusion Approach for Human Activity Recognition.

International Journal of Neural Systems · Impact Factor 6.6 · JCR Q1 (Computer Science, Artificial Intelligence) · Region 2 (Computer Science) · Publication date: 2023-01-01 · DOI: 10.1142/S0129065723500028
Dimitrios Koutrintzes, Evaggelos Spyrou, Eirini Mathe, Phivos Mylonas
{"title":"A Multimodal Fusion Approach for Human Activity Recognition.","authors":"Dimitrios Koutrintzes,&nbsp;Evaggelos Spyrou,&nbsp;Eirini Mathe,&nbsp;Phivos Mylonas","doi":"10.1142/S0129065723500028","DOIUrl":null,"url":null,"abstract":"<p><p>The problem of human activity recognition (HAR) has been increasingly attracting the efforts of the research community, having several applications. It consists of recognizing human motion and/or behavior within a given image or a video sequence, using as input raw sensor measurements. In this paper, a multimodal approach addressing the task of video-based HAR is proposed. It is based on 3D visual data that are collected using an RGB + depth camera, resulting to both raw video and 3D skeletal sequences. These data are transformed into six different 2D image representations; four of them are in the spectral domain, another is a pseudo-colored image. The aforementioned representations are based on skeletal data. The last representation is a \"dynamic\" image which is actually an artificially created image that summarizes RGB data of the whole video sequence, in a visually comprehensible way. In order to classify a given activity video, first, all the aforementioned 2D images are extracted and then six trained convolutional neural networks are used so as to extract visual features. The latter are fused so as to form a single feature vector and are fed into a support vector machine for classification into human activities. For evaluation purposes, a challenging motion activity recognition dataset is used, while single-view, cross-view and cross-subject experiments are performed. Moreover, the proposed approach is compared to three other state-of-the-art methods, demonstrating superior performance in most experiments.</p>","PeriodicalId":50305,"journal":{"name":"International Journal of Neural Systems","volume":"33 1","pages":"2350002"},"PeriodicalIF":6.6000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Neural Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1142/S0129065723500028","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 1

Abstract

The problem of human activity recognition (HAR) has been attracting increasing attention from the research community and has numerous applications. It consists of recognizing human motion and/or behavior within a given image or video sequence, using raw sensor measurements as input. In this paper, a multimodal approach addressing the task of video-based HAR is proposed. It is based on 3D visual data collected with an RGB + depth camera, yielding both raw video and 3D skeletal sequences. These data are transformed into six different 2D image representations: five are derived from the skeletal data (four in the spectral domain and one pseudo-colored image), while the sixth is a "dynamic" image, i.e., an artificially created image that summarizes the RGB data of the whole video sequence in a visually comprehensible way. To classify a given activity video, all six 2D images are first extracted, and six trained convolutional neural networks are then used to extract visual features. These features are fused into a single feature vector and fed into a support vector machine for classification into human activities. For evaluation, a challenging motion activity recognition dataset is used, and single-view, cross-view and cross-subject experiments are performed. Moreover, the proposed approach is compared to three other state-of-the-art methods, demonstrating superior performance in most experiments.
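To make the fusion step concrete, below is a minimal Python sketch of the late-fusion classification scheme stated in the abstract, assuming six pre-computed CNN feature matrices (one per 2D image representation). All array names, shapes and SVM hyperparameters are illustrative assumptions, not values reported in the paper.

# Minimal sketch of the late-fusion step: features from six per-representation
# CNNs are concatenated into one vector per video and classified with an SVM.
# Sizes, the random "features" and the linear kernel are illustrative
# assumptions, not details taken from the paper.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_videos, n_classes, feat_dim = 200, 10, 512  # hypothetical sizes

# Six feature matrices, one per 2D image representation (five skeletal-based
# images plus the RGB "dynamic" image), each of shape (n_videos, feat_dim).
per_modality_feats = [rng.normal(size=(n_videos, feat_dim)) for _ in range(6)]
labels = rng.integers(0, n_classes, size=n_videos)

# Late fusion: concatenate the per-modality CNN features into a single vector.
fused = np.concatenate(per_modality_feats, axis=1)  # (n_videos, 6 * feat_dim)

# SVM on the fused descriptor (kernel and C are assumptions).
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
clf.fit(fused[:150], labels[:150])
print("held-out accuracy:", clf.score(fused[150:], labels[150:]))

With real data, the six feature matrices would come from forward passes of the six trained CNNs over their respective image representations; only the concatenate-then-SVM structure mirrors the fusion scheme described in the abstract.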

Source Journal

International Journal of Neural Systems (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 11.30
Self-citation rate: 28.80%
Publication volume: 116
Review time: 24 months
Journal Introduction

The International Journal of Neural Systems is a monthly, rigorously peer-reviewed transdisciplinary journal focusing on information processing in both natural and artificial neural systems. Special interests include machine learning, computational neuroscience and neurology. The journal prioritizes innovative, high-impact articles spanning multiple fields, including neurosciences and computer science and engineering. It adopts an open-minded approach to this multidisciplinary field, serving as a platform for novel ideas and enhanced understanding of collective and cooperative phenomena in computationally capable systems.
Latest Articles from this Journal

Epileptic Seizure Detection with an End-to-end Temporal Convolutional Network and Bidirectional Long Short-Term Memory Model
A graph-based neural approach to linear sum assignment problems
Automated Quality Evaluation of Large-Scale Benchmark Datasets for Vision-Language Tasks
sEMG-based Inter-Session Hand Gesture Recognition via Domain Adaptation with Locality Preserving and Maximum Margin
Cultural Differences in the Assessment of Synthetic Voices