Utilizing Depth Sensors for Analyzing Multimodal Presentations: Hardware, Software and Toolkits

C. W. Leong, L. Chen, G. Feng, Chong Min Lee, Matthew David Mulholland
{"title":"Utilizing Depth Sensors for Analyzing Multimodal Presentations: Hardware, Software and Toolkits","authors":"C. W. Leong, L. Chen, G. Feng, Chong Min Lee, Matthew David Mulholland","doi":"10.1145/2818346.2830605","DOIUrl":null,"url":null,"abstract":"Body language plays an important role in learning processes and communication. For example, communication research produced evidence that mathematical knowledge can be embodied in gestures made by teachers and students. Likewise, body postures and gestures are also utilized by speakers in oral presentations to convey ideas and important messages. Consequently, capturing and analyzing non-verbal behaviors is an important aspect in multimodal learning analytics (MLA) research. With regard to sensing capabilities, the introduction of depth sensors such as the Microsoft Kinect has greatly facilitated research and development in this area. However, the rapid advancement in hardware and software capabilities is not always in sync with the expanding set of features reported in the literature. For example, though Anvil is a widely used state-of-the-art annotation and visualization toolkit for motion traces, its motion recording component based on OpenNI is outdated. As part of our research in developing multimodal educational assessments, we began an effort to develop and standardize algorithms for purposes of multimodal feature extraction and creating automated scoring models. This paper provides an overview of relevant work in multimodal research on educational tasks, and proceeds to summarize our work using multimodal sensors in developing assessments of communication skills, with attention on the use of depth sensors. Specifically, we focus on the task of public speaking assessment using Microsoft Kinect. Additionally, we introduce an open-source Python package for computing expressive body language features from Kinect motion data, which we hope will benefit the MLA research community.","PeriodicalId":20486,"journal":{"name":"Proceedings of the 2015 ACM on International Conference on Multimodal Interaction","volume":"7 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2015-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"20","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2015 ACM on International Conference on Multimodal Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2818346.2830605","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 20

Abstract

Body language plays an important role in learning processes and communication. For example, communication research has produced evidence that mathematical knowledge can be embodied in gestures made by teachers and students. Likewise, body postures and gestures are utilized by speakers in oral presentations to convey ideas and important messages. Consequently, capturing and analyzing non-verbal behaviors is an important aspect of multimodal learning analytics (MLA) research. With regard to sensing capabilities, the introduction of depth sensors such as the Microsoft Kinect has greatly facilitated research and development in this area. However, the rapid advancement in hardware and software capabilities is not always in sync with the expanding set of features reported in the literature. For example, though Anvil is a widely used state-of-the-art annotation and visualization toolkit for motion traces, its motion recording component based on OpenNI is outdated. As part of our research in developing multimodal educational assessments, we began an effort to develop and standardize algorithms for multimodal feature extraction and for building automated scoring models. This paper provides an overview of relevant work in multimodal research on educational tasks, and proceeds to summarize our work using multimodal sensors in developing assessments of communication skills, with attention to the use of depth sensors. Specifically, we focus on the task of public speaking assessment using Microsoft Kinect. Additionally, we introduce an open-source Python package for computing expressive body language features from Kinect motion data, which we hope will benefit the MLA research community.
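To make the kind of feature the abstract alludes to concrete, below is a minimal Python sketch of computing two session-level body language features from a Kinect skeleton trace: gesture energy (mean hand speed) and body openness (mean wrist-to-wrist span). The array layout, joint indices, frame rate, and feature definitions are illustrative assumptions for this sketch, not the API of the authors' released package.

```python
# Hypothetical sketch of expressive body language features from Kinect
# skeleton data. Joint indices and data layout are assumptions, not the
# paper's package API.
import numpy as np

FPS = 30                          # Kinect skeleton stream rate (30 Hz)
LEFT_WRIST, RIGHT_WRIST = 6, 10   # hypothetical joint indices

def gesture_energy(frames: np.ndarray, joint: int) -> float:
    """Mean speed (m/s) of one joint across the recording.

    frames: (n_frames, n_joints, 3) array of joint positions in meters.
    """
    positions = frames[:, joint, :]
    # Finite-difference velocity between consecutive frames, scaled to m/s.
    velocities = np.diff(positions, axis=0) * FPS
    return float(np.linalg.norm(velocities, axis=1).mean())

def body_openness(frames: np.ndarray) -> float:
    """Mean wrist-to-wrist distance (m), a simple expansiveness proxy."""
    span = frames[:, LEFT_WRIST, :] - frames[:, RIGHT_WRIST, :]
    return float(np.linalg.norm(span, axis=1).mean())

if __name__ == "__main__":
    # Synthetic two-second trace standing in for a real Kinect recording.
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(2 * FPS, 20, 3))
    print(f"left-hand gesture energy: {gesture_energy(frames, LEFT_WRIST):.3f} m/s")
    print(f"body openness:            {body_openness(frames):.3f} m")
```

In practice the skeleton frames would come from the Kinect SDK or a recorded motion trace; the point of the sketch is only the shape of the computation, per-frame joint positions in, scalar session-level features out, which is the form such expressiveness features typically take.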