
Journal on Multimodal User Interfaces: Latest Publications

Predicting multimodal presentation skills based on instance weighting domain adaptation
IF 2.9, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2021-02-18. DOI: 10.1007/s12193-021-00367-x
Yutaro Yagi, Shogo Okada, Shota Shiobara, Sota Sugimura

Presentation skills assessment is one of the central challenges of multimodal modeling. Presentation skills comprise verbal and nonverbal components, but because people demonstrate their presentation skills in a variety of manners, the observed multimodal features vary widely. Due to these differences, when test data samples are drawn from a distribution different from that of the training data, the prediction accuracy of the skills often degrades. In machine learning theory, this problem, in which the training (source) data are biased, is known as instance selection bias or covariate shift. To solve this problem, this paper presents an instance weighting adaptation method that estimates the presentation skills of each participant from multimodal (verbal and nonverbal) features. For this purpose, we collect a novel multimodal presentation dataset that includes audio signal data, body motion sensor data, and text data of the speech content for participants observed in 58 presentation sessions. The dataset also includes both verbal and nonverbal presentation skill ratings, assessed by two external experts from a human resources department. We extract multimodal features, such as spoken utterances, acoustic features, and the amount of body motion, to estimate the presentation skills. We propose two approaches, early fusion and late fusion, for the regression models based on multimodal instance weighting adaptation. The experimental results show that the early fusion regression model with instance weighting adaptation achieved a Pearson correlation of ρ = 0.39 for the regression of the clarity of presentation goal elements. In the best case, the accuracy (correlation coefficient) is improved from −0.34 to +0.35 by instance weighting adaptation.
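The covariate-shift correction the abstract describes can be illustrated with a standard density-ratio trick: train a probabilistic classifier to separate source (training) samples from target (test) samples, then convert its posterior into importance weights for a weighted regressor. This is a minimal sketch of instance weighting in general — not necessarily the estimator used in the paper — and the use of scikit-learn, the toy data, and all variable names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

def covariate_shift_weights(X_src, X_tgt):
    """Importance weights w(x) ≈ p_tgt(x) / p_src(x) for source samples,
    obtained from a probabilistic source-vs-target classifier."""
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p = clf.predict_proba(X_src)[:, 1]  # P(target | x)
    # Bayes' rule: p_tgt/p_src = [P(tgt|x) / P(src|x)] * [n_src / n_tgt]
    return (p / (1.0 - p)) * (len(X_src) / len(X_tgt))

# Toy data standing in for multimodal feature vectors (hypothetical).
rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(200, 3))  # "training speakers"
X_tgt = rng.normal(0.5, 1.0, size=(100, 3))  # "test speakers", shifted
y_src = X_src @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 200)

w = covariate_shift_weights(X_src, X_tgt)
# Source samples that look like target samples get larger weights.
model = Ridge().fit(X_src, y_src, sample_weight=w)
```

The design choice mirrors the covariate-shift literature: rather than discarding biased training samples, each is reweighted by how representative it is of the test distribution before fitting the skill regressor.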

Citations: 3
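The early/late fusion contrast in the abstract above follows a general multimodal pattern: early fusion concatenates per-modality features and fits a single regressor, while late fusion fits one regressor per modality and combines their predictions. A hedged sketch under assumed names and scikit-learn models (not the authors' code):

```python
import numpy as np
from sklearn.linear_model import Ridge

def early_fusion_fit(modalities, y, sample_weight=None):
    """One regressor over the concatenation of all modality features."""
    return Ridge().fit(np.hstack(modalities), y, sample_weight=sample_weight)

def late_fusion_predict(train_modalities, y, test_modalities, sample_weight=None):
    """One regressor per modality; average the per-modality predictions."""
    preds = [
        Ridge().fit(Xtr, y, sample_weight=sample_weight).predict(Xte)
        for Xtr, Xte in zip(train_modalities, test_modalities)
    ]
    return np.mean(preds, axis=0)

# Hypothetical audio / motion / text feature blocks: 50 train, 10 test talks.
rng = np.random.default_rng(1)
train = [rng.normal(size=(50, d)) for d in (8, 4, 16)]
test = [rng.normal(size=(10, d)) for d in (8, 4, 16)]
y = rng.normal(size=50)

early = early_fusion_fit(train, y).predict(np.hstack(test))
late = late_fusion_predict(train, y, test)
```

Either variant accepts `sample_weight`, which is where the instance weights from the adaptation step would plug in.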
Facial expression and action unit recognition augmented by their dependencies on graph convolutional networks
IF 2.9, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2021-01-26. DOI: 10.1007/s12193-020-00363-7. pp. 429–440.
Jun He, Xiaocui Yu, Bo Sun, Lejun Yu
Citations: 7
Neighborhood based decision theoretic rough set under dynamic granulation for BCI motor imagery classification
IF 2.9, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2021-01-25. DOI: 10.1007/s12193-020-00358-4. pp. 301–321.
K. Renuga Devi, H. Hannah Inbarani
Citations: 11
Comparing mind perception in strategic exchanges: human-agent negotiation, dictator and ultimatum games
IF 2.9, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2021-01-21. DOI: 10.1007/s12193-020-00356-6. pp. 201–214.
Minha Lee, Gale M. Lucas, J. Gratch
Citations: 13
A novel focus encoding scheme for addressee detection in multiparty interaction using machine learning algorithms
IF 2.9, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2021-01-17. DOI: 10.1007/s12193-020-00361-9. pp. 175–188.
Usman Malik, Mukesh Barange, Julien Saunier, A. Pauchet
Citations: 3
PLAAN: Pain Level Assessment with Anomaly-detection based Network
IF 2.9, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2021-01-06. DOI: 10.1007/s12193-020-00362-8. pp. 359–372.
Yi Li, Shreya Ghosh, Jyoti Joshi
Citations: 6
Internet-based tailored virtual human health intervention to promote colorectal cancer screening: design guidelines from two user studies
IF 2.9, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2021-01-02. DOI: 10.1007/s12193-020-00357-5. pp. 147–162.
Mohan S Zalake, F. Tavassoli, Kyle A. Duke, T. George, François Modave, J. Neil, Janice L. Krieger, B. Lok
Citations: 4
Words of encouragement: how praise delivered by a social robot changes children’s mindset for learning
IF 2.9, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2020-11-24. DOI: 10.1007/s12193-020-00353-9. pp. 61–76.
Daniel P. Davison, F. Wijnen, V. Charisi, J. van der Meij, D. Reidsma, V. Evers
Citations: 14
An audiovisual interface-based drumming system for multimodal human–robot interaction
IF 2.9, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2020-11-13. DOI: 10.1007/s12193-020-00352-w. pp. 413–428.
G. Ince, R. Yorganci, A. Ozkul, Taha Berkay Duman, Hatice Köse
Citations: 7
Virtual agents as supporting media for scientific presentations
IF 2.9, CAS Tier 3 (Computer Science), Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2020-11-06. DOI: 10.1007/s12193-020-00350-y. pp. 131–146.
T. Bickmore, Everlyne Kimani, Ameneh Shamekhi, Prasanth Murali, Dhaval Parmar, H. Trinh
Citations: 9