Single application model, multiple synchronized views

Rafah Hosn, Stephane H Maes, T. Raman
{"title":"单个应用程序模型,多个同步视图","authors":"Rafah Hosn, Stephane H Maes, T. Raman","doi":"10.1109/ICME.2001.1237813","DOIUrl":null,"url":null,"abstract":"User interface is a mean to an end —its primary goal is to capture user intent and communicate the results of the requested computation. On today’s devices, user interaction can be achieved through a multiplicity of interaction modalities including speech and visual interfaces. As we evolve toward an increasingly connected world where we access and interact with applications through multiple devices, it becomes crucial that the various access paths to the underlying content be synchronized. This synchronization ensures that the user interacts with the same underlying content independent of the interaction modality — despite the difference in presentation that each modality might impose. It also ensures that the effect of user interaction in any given modality is reflected consistently across all available modalities. We describe an application framework that enables tightly synchronized multimodal user interaction. This framework derives its power from representing the application model in a modality-independent manner, and by traversing this model to produce the various synchronized multimodal views. As the user interaction proceeds, we maintain our current position in the model and update the application data as determined by user intent, then reflect these updates in the various views being presented. We conclude the paper by outlining an example that demonstrates this tightly synchronized multimodal interaction, and describe some of the future challenges in building such multimodal frameworks.","PeriodicalId":405589,"journal":{"name":"IEEE International Conference on Multimedia and Expo, 2001. ICME 2001.","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2001-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Single application model, multiple synchronized views\",\"authors\":\"Rafah Hosn, Stephane H Maes, T. Raman\",\"doi\":\"10.1109/ICME.2001.1237813\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"User interface is a mean to an end —its primary goal is to capture user intent and communicate the results of the requested computation. On today’s devices, user interaction can be achieved through a multiplicity of interaction modalities including speech and visual interfaces. As we evolve toward an increasingly connected world where we access and interact with applications through multiple devices, it becomes crucial that the various access paths to the underlying content be synchronized. This synchronization ensures that the user interacts with the same underlying content independent of the interaction modality — despite the difference in presentation that each modality might impose. It also ensures that the effect of user interaction in any given modality is reflected consistently across all available modalities. We describe an application framework that enables tightly synchronized multimodal user interaction. This framework derives its power from representing the application model in a modality-independent manner, and by traversing this model to produce the various synchronized multimodal views. As the user interaction proceeds, we maintain our current position in the model and update the application data as determined by user intent, then reflect these updates in the various views being presented. 
We conclude the paper by outlining an example that demonstrates this tightly synchronized multimodal interaction, and describe some of the future challenges in building such multimodal frameworks.\",\"PeriodicalId\":405589,\"journal\":{\"name\":\"IEEE International Conference on Multimedia and Expo, 2001. ICME 2001.\",\"volume\":\"16 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2001-08-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE International Conference on Multimedia and Expo, 2001. ICME 2001.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICME.2001.1237813\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE International Conference on Multimedia and Expo, 2001. ICME 2001.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICME.2001.1237813","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

A user interface is a means to an end: its primary goal is to capture user intent and communicate the results of the requested computation. On today's devices, user interaction can be achieved through a multiplicity of interaction modalities, including speech and visual interfaces. As we evolve toward an increasingly connected world in which we access and interact with applications through multiple devices, it becomes crucial that the various access paths to the underlying content be synchronized. This synchronization ensures that the user interacts with the same underlying content independent of the interaction modality, despite the differences in presentation that each modality may impose. It also ensures that the effect of user interaction in any given modality is reflected consistently across all available modalities. We describe an application framework that enables tightly synchronized multimodal user interaction. This framework derives its power from representing the application model in a modality-independent manner and from traversing this model to produce the various synchronized multimodal views. As the user interaction proceeds, we maintain our current position in the model, update the application data as determined by user intent, and then reflect these updates in the various views being presented. We conclude the paper by outlining an example that demonstrates this tightly synchronized multimodal interaction and describe some of the future challenges in building such multimodal frameworks.
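The abstract describes the architecture only at a high level. As a rough illustration of how a single modality-independent model can drive several synchronized views, below is a minimal observer-style sketch in Python. All names here (InteractionModel, submit, the two view functions) are hypothetical illustrations of the idea, not the paper's actual API.

```python
# A minimal sketch of "single application model, multiple synchronized views"
# using a plain observer pattern. Hypothetical names; not the paper's API.

from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class InteractionModel:
    """Modality-independent application model: named fields plus a cursor
    marking the current position in the interaction."""
    fields: dict[str, Optional[str]]
    cursor: Optional[str] = None
    _observers: list[Callable[["InteractionModel"], None]] = field(default_factory=list)

    def attach(self, observer: Callable[["InteractionModel"], None]) -> None:
        """Register a view to be refreshed whenever the model changes."""
        self._observers.append(observer)

    def submit(self, name: str, value: str) -> None:
        """Apply user intent arriving from any modality, then refresh all views."""
        self.fields[name] = value
        # Advance the cursor to the next unfilled field, if any remain.
        self.cursor = next((f for f, v in self.fields.items() if v is None), None)
        for observer in self._observers:
            observer(self)


def visual_view(model: InteractionModel) -> None:
    # A GUI would re-render its widgets; here we just print the form state.
    print(f"[visual] form={model.fields} focus={model.cursor}")


def speech_view(model: InteractionModel) -> None:
    # A voice browser would prompt for the field at the current position.
    if model.cursor:
        print(f"[speech] 'Please say your {model.cursor}.'")
    else:
        print("[speech] 'All done, thank you.'")


model = InteractionModel(fields={"city": None, "date": None})
model.attach(visual_view)
model.attach(speech_view)

model.submit("city", "Yorktown")  # spoken input...
model.submit("date", "Aug 22")    # ...or typed input: both views stay in sync
```

The property this sketch mirrors is that input in any one modality mutates only the shared model; every view, including the one that originated the input, re-renders from that model, so the modalities cannot drift apart.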