Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks

IF 2.4 · Q3 (Computer Science, Artificial Intelligence) · Multimodal Technologies and Interaction · Pub Date: 2018-12-06 · DOI: 10.3390/MTI2040081
C. Zimmerer, Martin Fischbach, Marc Erich Latoschik
Citations: 5

Abstract

Semantic fusion is a central requirement of many multimodal interfaces. Procedural methods like finite-state transducers and augmented transition networks have proven to be beneficial to implement semantic fusion. They are compliant with rapid development cycles that are common for the development of user interfaces, in contrast to machine-learning approaches that require time-costly training and optimization. We identify seven fundamental requirements for the implementation of semantic fusion: Action derivation, continuous feedback, context-sensitivity, temporal relation support, access to the interaction context, as well as the support of chronologically unsorted and probabilistic input. A subsequent analysis reveals, however, that there is currently no solution for fulfilling the latter two requirements. As the main contribution of this article, we thus present the Concurrent Cursor concept to compensate these shortcomings. In addition, we showcase a reference implementation, the Concurrent Augmented Transition Network (cATN), that validates the concept’s feasibility in a series of proof of concept demonstrations as well as through a comparative benchmark. The cATN fulfills all identified requirements and fills the lack amongst previous solutions. It supports the rapid prototyping of multimodal interfaces by means of five concrete traits: Its declarative nature, the recursiveness of the underlying transition network, the network abstraction constructs of its description language, the utilized semantic queries, and an abstraction layer for lexical information. Our reference implementation was and is used in various student projects, theses, as well as master-level courses. It is openly available and showcases that non-experts can effectively implement multimodal interfaces, even for non-trivial applications in mixed and virtual reality.
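To make the abstract's central idea concrete, the following is a toy sketch of a transition network that fuses speech and pointing input into a single action frame, keeping several "concurrent cursors" alive so that competing, probabilistic input hypotheses can be tracked in parallel. This is an illustrative sketch only, not the authors' cATN implementation; all identifiers (`Token`, `Cursor`, `step`, `fuse`) are hypothetical.

```python
# Toy semantic-fusion sketch in the spirit of an augmented transition
# network with concurrent cursors. Not the paper's cATN; for illustration only.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Token:
    modality: str      # "speech" or "gesture"
    value: str
    confidence: float  # recognizer confidence in [0, 1]

@dataclass(frozen=True)
class Cursor:
    state: str = "start"
    score: float = 1.0  # accumulated hypothesis probability
    slots: tuple = ()   # accumulated (register, value) pairs

def step(cursor, token):
    """Try to advance one cursor by one token; return None if no transition fires."""
    if cursor.state == "start" and token.modality == "speech" and token.value == "put":
        return replace(cursor, state="object", score=cursor.score * token.confidence)
    if cursor.state == "object" and token.modality == "speech":
        return replace(cursor, state="target",
                       score=cursor.score * token.confidence,
                       slots=cursor.slots + (("object", token.value),))
    if cursor.state == "target" and token.modality == "gesture":
        return replace(cursor, state="done",
                       score=cursor.score * token.confidence,
                       slots=cursor.slots + (("target", token.value),))
    return None

def fuse(tokens):
    """Advance all cursors concurrently; return the best completed action frame."""
    cursors = [Cursor()]
    for tok in tokens:
        # Keep old cursors alive: ambiguous input may spawn parallel hypotheses.
        cursors += [c2 for c in cursors if (c2 := step(c, tok)) is not None]
    done = [c for c in cursors if c.state == "done"]
    if not done:
        return None  # utterance incomplete: no action derived
    best = max(done, key=lambda c: c.score)
    return {"action": "put", **dict(best.slots), "score": best.score}
```

For example, feeding the stream `put (0.9) / ball (0.8) / box (0.3) / point-at-table (0.95)` keeps one cursor per object hypothesis and selects the higher-scoring frame (`object=ball`). The real cATN additionally covers requirements this sketch omits, such as chronologically unsorted input, continuous feedback, and access to the interaction context.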
Source journal: Multimodal Technologies and Interaction (Computer Science — Computer Science Applications). CiteScore: 4.90 · Self-citation rate: 8.00% · Articles per year: 94 · Review time: 4 weeks.