Context-Aware Dynamic Presentation Synthesis for Exploratory Multimodal Environments

H. Sridharan, Ankur Mani, H. Sundaram, J. Brungart, David Birchfield
{"title":"Context-Aware Dynamic Presentation Synthesis for Exploratory Multimodal Environments","authors":"H. Sridharan, Ankur Mani, H. Sundaram, J. Brungart, David Birchfield","doi":"10.1109/ICME.2005.1521596","DOIUrl":null,"url":null,"abstract":"In this paper, we develop a novel real-time, interactive, automatic multimodal exploratory environment that dynamically adapts the media presented, to user context. There are two key contributions of this paper-(a) development of multimodal user-context model and (b) modeling the dynamics of the presentation to maximize coherence. We develop a novel user-context model comprising interests, media history, interaction behavior and tasks, that evolves based on the specific interaction. We also develop novel metrics between media elements and the user context. The presentation environment dynamically adapts to the current user context. We develop an optimal media selection and display framework that maximizes coherence, while constrained by the user-context, user goals and the structure of the knowledge in the exploratory environment. The experimental results indicate that the system performs well. The results also show that user-context models significantly improve presentation coherence","PeriodicalId":244360,"journal":{"name":"2005 IEEE International Conference on Multimedia and Expo","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2005-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2005 IEEE International Conference on Multimedia and Expo","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICME.2005.1521596","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

In this paper, we develop a novel real-time, interactive, automatic multimodal exploratory environment that dynamically adapts the media presented to the user context. This paper makes two key contributions: (a) the development of a multimodal user-context model, and (b) modeling the dynamics of the presentation to maximize coherence. We develop a novel user-context model comprising interests, media history, interaction behavior, and tasks, which evolves based on the specific interaction. We also develop novel metrics between media elements and the user context. The presentation environment dynamically adapts to the current user context. We develop an optimal media selection and display framework that maximizes coherence while being constrained by the user context, user goals, and the structure of the knowledge in the exploratory environment. The experimental results indicate that the system performs well. The results also show that user-context models significantly improve presentation coherence.
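The abstract describes the system only at a high level, so the following is a minimal, hypothetical Python sketch of the two contributions it names: a user-context model (interests, media history, interaction behavior) that evolves with interaction, and media selection that balances relevance to that context against coherence with what was shown previously. Every concrete choice here (the cosine similarity metric, the 0.9 interest-decay factor, the alpha trade-off, and greedy rather than globally optimal selection) is an illustrative assumption, not a detail from the paper.

```python
# Hypothetical sketch of a user-context model and coherence-aware media
# selection, loosely patterned on the abstract above. Metrics, weights,
# and the greedy selection policy are illustrative assumptions.
import math
from dataclasses import dataclass, field


def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity over sparse topic-weight vectors (assumed metric)."""
    dot = sum(w * b.get(k, 0.0) for k, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0


@dataclass
class MediaElement:
    uri: str
    features: dict[str, float]  # topic -> weight annotations for the element


@dataclass
class UserContext:
    interests: dict[str, float]  # topic -> weight, evolves with interaction
    media_history: list[MediaElement] = field(default_factory=list)

    def update(self, shown: MediaElement, dwell_seconds: float) -> None:
        """Evolve interests from interaction behavior: longer dwell on an
        element boosts the weight of its topics (saturating at 30 s)."""
        self.media_history.append(shown)
        gain = min(dwell_seconds / 30.0, 1.0)
        for topic, w in shown.features.items():
            self.interests[topic] = 0.9 * self.interests.get(topic, 0.0) + gain * w


def select_next(candidates: list[MediaElement],
                ctx: UserContext,
                alpha: float = 0.6) -> MediaElement:
    """Greedy selection: score each candidate by relevance to the user
    context, blended with coherence to the most recently shown element."""
    last = ctx.media_history[-1] if ctx.media_history else None

    def score(m: MediaElement) -> float:
        relevance = cosine(m.features, ctx.interests)
        coherence = cosine(m.features, last.features) if last else 0.0
        return alpha * relevance + (1.0 - alpha) * coherence

    return max(candidates, key=score)


if __name__ == "__main__":
    ctx = UserContext(interests={"sculpture": 1.0})
    pool = [MediaElement("clip1.mp4", {"sculpture": 0.8, "sound": 0.2}),
            MediaElement("img7.png", {"painting": 0.9})]
    chosen = select_next(pool, ctx)
    ctx.update(chosen, dwell_seconds=12.0)
    print(chosen.uri, ctx.interests)
```

Note that the paper describes an optimal selection and display framework; the greedy per-step scoring above is a deliberate simplification that only illustrates how relevance and coherence could be traded off against each other.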