A novel method for multi-sensory data fusion in multimodal human computer interaction

Yong Sun, Fang Chen, Yu Shi, Yuk Ying Chung
{"title":"一种多模式人机交互中多感官数据融合的新方法","authors":"Yong Sun, Fang Chen, Yu Shi, Yuk Ying Chung","doi":"10.1145/1228175.1228257","DOIUrl":null,"url":null,"abstract":"Multimodal User Interaction (MMUI) technology aims at building natural and intuitive interfaces allowing a user to interact with computer in a way similar to human-to-human communication, for example, through speech and gestures. As a critical component in MMUI, Multimodal Input Fusion explores ways to effectively interpret the combined semantic interpretation of user inputs through multiple modalities. This paper presents a novel approach to multi-sensory data fusion based on speech and manual deictic gesture inputs. The effectiveness of the technique has been validated through experiments, using a traffic incident management scenario where an operator interacts with a map on a large display at a distance and issues multimodal commands through speech and manual gestures. The description of the proposed approach and preliminary experiment results are presented.","PeriodicalId":164924,"journal":{"name":"Proceedings of the 18th Australia conference on Computer-Human Interaction: Design: Activities, Artefacts and Environments","volume":"105 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"18","resultStr":"{\"title\":\"A novel method for multi-sensory data fusion in multimodal human computer interaction\",\"authors\":\"Yong Sun, Fang Chen, Yu Shi, Yuk Ying Chung\",\"doi\":\"10.1145/1228175.1228257\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Multimodal User Interaction (MMUI) technology aims at building natural and intuitive interfaces allowing a user to interact with computer in a way similar to human-to-human communication, for example, through speech and gestures. As a critical component in MMUI, Multimodal Input Fusion explores ways to effectively interpret the combined semantic interpretation of user inputs through multiple modalities. This paper presents a novel approach to multi-sensory data fusion based on speech and manual deictic gesture inputs. The effectiveness of the technique has been validated through experiments, using a traffic incident management scenario where an operator interacts with a map on a large display at a distance and issues multimodal commands through speech and manual gestures. 
The description of the proposed approach and preliminary experiment results are presented.\",\"PeriodicalId\":164924,\"journal\":{\"name\":\"Proceedings of the 18th Australia conference on Computer-Human Interaction: Design: Activities, Artefacts and Environments\",\"volume\":\"105 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2006-11-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"18\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 18th Australia conference on Computer-Human Interaction: Design: Activities, Artefacts and Environments\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/1228175.1228257\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 18th Australia conference on Computer-Human Interaction: Design: Activities, Artefacts and Environments","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1228175.1228257","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 18

Abstract

Multimodal User Interaction (MMUI) technology aims to build natural and intuitive interfaces that allow a user to interact with a computer in a way similar to human-to-human communication, for example through speech and gestures. As a critical component of MMUI, Multimodal Input Fusion explores ways to derive an effective combined semantic interpretation of user inputs issued through multiple modalities. This paper presents a novel approach to multi-sensory data fusion based on speech and manual deictic gesture inputs. The effectiveness of the technique has been validated through experiments using a traffic incident management scenario, in which an operator interacts with a map on a large display at a distance and issues multimodal commands through speech and manual gestures. A description of the proposed approach and preliminary experimental results are presented.
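The abstract itself gives no algorithmic detail. As a rough, hypothetical illustration of the general idea behind fusing speech with deictic gestures, the sketch below binds each deictic word in a speech transcript to the temporally nearest pointing gesture on the map. Every name here (SpeechToken, PointGesture, fuse, MAX_GAP) and the fixed time-window heuristic are assumptions made for illustration, not the paper's actual method.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechToken:
    word: str
    t: float          # timestamp (seconds) from the speech recognizer

@dataclass
class PointGesture:
    x: float          # map coordinate indicated by the pointing gesture
    y: float
    t: float          # timestamp (seconds) from the gesture tracker

DEICTIC_WORDS = {"this", "that", "here", "there"}
MAX_GAP = 1.0         # assumed alignment window between word and gesture

def fuse(tokens: list[SpeechToken], gestures: list[PointGesture]) -> list[dict]:
    """Bind each deictic word to the temporally closest pointing gesture
    within MAX_GAP, grounding the word to a map location."""
    bindings = []
    for tok in tokens:
        if tok.word.lower() not in DEICTIC_WORDS:
            continue
        nearest: Optional[PointGesture] = min(
            gestures, key=lambda g: abs(g.t - tok.t), default=None)
        if nearest is not None and abs(nearest.t - tok.t) <= MAX_GAP:
            bindings.append({"word": tok.word, "t": tok.t,
                             "target": (nearest.x, nearest.y)})
    return bindings

# Example: the operator says "close this incident" while pointing at the map.
tokens = [SpeechToken("close", 0.2), SpeechToken("this", 0.5),
          SpeechToken("incident", 0.8)]
gestures = [PointGesture(3.2, 7.5, 0.6)]
print(fuse(tokens, gestures))
# -> [{'word': 'this', 't': 0.5, 'target': (3.2, 7.5)}]
```

Time-windowed alignment of this kind is only one of several published fusion strategies (unification-based and parser-based fusion are also common); it is shown here purely to make the speech-plus-gesture idea concrete.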