Benchmarking Search and Annotation in Continuous Human Skeleton Sequences

J. Sedmidubský, Petr Elias, P. Zezula
{"title":"Benchmarking Search and Annotation in Continuous Human Skeleton Sequences","authors":"J. Sedmidubský, Petr Elias, P. Zezula","doi":"10.1145/3323873.3325013","DOIUrl":null,"url":null,"abstract":"Motion capture data are digital representations of human movements in form of 3D trajectories of multiple body joints. To understand the captured motions, similarity-based processing and deep learning have already proved to be effective, especially in classifying pre-segmented actions. However, in real-world scenarios motion data are typically captured as long continuous sequences, without explicit knowledge of semantic partitioning. To make such unsegmented data accessible and reusable as required by many applications, there is a strong requirement to analyze, search, annotate and mine them automatically. However, there is currently an absence of datasets and benchmarks to test and compare the capabilities of the developed techniques for continuous motion data processing. In this paper, we introduce a new large-scale LSMB19 dataset consisting of two 3D skeleton sequences of a total length of 54.5 hours. We also define a benchmark on two important multimedia retrieval operations: subsequence search and annotation. 
Additionally, we exemplify the usability of the benchmark by establishing baseline results for these operations.","PeriodicalId":149041,"journal":{"name":"Proceedings of the 2019 on International Conference on Multimedia Retrieval","volume":"92 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 on International Conference on Multimedia Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3323873.3325013","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

Motion capture data are digital representations of human movements in the form of 3D trajectories of multiple body joints. To understand the captured motions, similarity-based processing and deep learning have already proved to be effective, especially in classifying pre-segmented actions. However, in real-world scenarios, motion data are typically captured as long continuous sequences, without explicit knowledge of semantic partitioning. To make such unsegmented data accessible and reusable as required by many applications, there is a strong need to analyze, search, annotate, and mine them automatically. However, there is currently an absence of datasets and benchmarks for testing and comparing the capabilities of techniques developed for continuous motion data processing. In this paper, we introduce the new large-scale LSMB19 dataset, consisting of two 3D skeleton sequences with a total length of 54.5 hours. We also define a benchmark on two important multimedia retrieval operations: subsequence search and annotation. Additionally, we exemplify the usability of the benchmark by establishing baseline results for these operations.
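To make the data model and the subsequence-search operation concrete, the following is a minimal illustrative sketch, not the paper's actual method or evaluation protocol: a skeleton sequence is modeled as a list of frames, each frame a list of (x, y, z) joint coordinates, and a query motion is located by sliding it over the continuous sequence and scoring each window by mean per-joint Euclidean distance. All function names and the toy data are assumptions for illustration.

```python
# Illustrative sketch: brute-force subsequence search in a continuous
# skeleton sequence. A sequence is a list of frames; each frame is a
# list of (x, y, z) joint coordinates.
import math


def frame_distance(f1, f2):
    """Mean Euclidean distance between corresponding joints of two frames."""
    total = sum(math.dist(j1, j2) for j1, j2 in zip(f1, f2))
    return total / len(f1)


def subsequence_search(sequence, query):
    """Return (start_index, score) of the query's best-matching window.

    Slides the query over the sequence and scores each window by the
    average frame-to-frame distance; lower scores mean better matches.
    """
    n, m = len(sequence), len(query)
    best_start, best_score = -1, float("inf")
    for start in range(n - m + 1):
        score = sum(frame_distance(sequence[start + i], query[i])
                    for i in range(m)) / m
        if score < best_score:
            best_start, best_score = start, score
    return best_start, best_score


# Toy example: a 1-joint "skeleton" moving along the x-axis.
seq = [[(float(t), 0.0, 0.0)] for t in range(10)]
query = [[(4.0, 0.0, 0.0)], [(5.0, 0.0, 0.0)], [(6.0, 0.0, 0.0)]]
start, score = subsequence_search(seq, query)
print(start, score)  # best window starts at frame 4 with distance 0.0
```

A realistic system would replace the exhaustive scan with indexing and the rigid frame alignment with an elastic measure such as dynamic time warping, but the sketch shows why unsegmented data require a search over all candidate windows rather than a classifier over pre-cut actions.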