
ICMI-MLMI '10: Latest Publications

Language and thought: talking, gesturing (and signing) about space
Pub Date : 2010-11-08 DOI: 10.1145/1891903.1891905
J. Haviland
Recent research has reopened debates about (neo)Whorfian claims that the language one speaks has an impact on how one thinks---long discounted by mainstream linguistics and anthropology alike. Some of the most striking evidence for such possible impact derives, not surprisingly, from understudied "exotic" languages and, somewhat more surprisingly, from multimodal and notably gestural practices in communities which speak them. In particular, some of my own work on GuuguYimithirr, a Paman language spoken by Aboriginal people in northeastern Australia, and on Tzotzil, a language spoken by Mayan peasants in southeastern Mexico, suggests strong connections between linguistic expressions of spatial relations, gestural practices in talking about location and motion, and cognitive representations of space---what have come to be called spatial "Frames of Reference." In this talk, I will present some of the evidence for such connections, and add to the mix evidence from an emerging, first generation sign language developed spontaneously in a single family by deaf siblings who have had contact with neither other deaf people nor any other sign language.
Citations: 0
Enabling multimodal discourse for the blind
Pub Date : 2010-11-08 DOI: 10.1145/1891903.1891927
Francisco Oliveira, H. Cowan, Bing Fang, Francis K. H. Quek
This paper presents research showing that a high degree of skilled performance is required to support multimodal discourse. We discuss how students who are blind or visually impaired (SBVI) were able to understand the instructor's pointing gestures during planar geometry and trigonometry classes. For that, the SBVI must attend to the instructor's speech and have simultaneous access to the instructional graphic material and to where the instructor is pointing. We developed the Haptic Deictic System (HDS), capable of tracking the instructor's pointing and informing the SBVI, through a haptic glove, where she needs to move her hand to understand the instructor's illustration-augmented discourse. Several challenges had to be overcome before the SBVI were able to engage in fluid multimodal discourse with the help of the HDS. We discuss how such challenges were addressed with respect to perception and discourse (especially mathematics instruction).
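A minimal sketch of the kind of feedback loop this abstract describes, assuming a tracked pointing target and a tracked hand position in normalized graphic coordinates; the function name, cue vocabulary and dead-zone threshold are illustrative assumptions, not the actual HDS implementation.

```python
import numpy as np

def direction_cue(hand_xy: np.ndarray, target_xy: np.ndarray, deadzone: float = 0.02) -> str:
    """Map the hand-to-target offset to one of five discrete haptic cues."""
    delta = target_xy - hand_xy
    if np.linalg.norm(delta) < deadzone:            # the hand is already on the target
        return "hold"
    # pick the dominant axis so only one glove motor is driven at a time
    if abs(delta[0]) >= abs(delta[1]):
        return "right" if delta[0] > 0 else "left"
    return "up" if delta[1] > 0 else "down"

# instructor points at (0.8, 0.6) on the graphic; the student's hand is at (0.3, 0.55)
print(direction_cue(np.array([0.3, 0.55]), np.array([0.8, 0.6])))   # -> "right"
```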
Citations: 8
Multimodal interactive machine translation
Pub Date : 2010-11-08 DOI: 10.1145/1891903.1891960
Vicente Alabau, Daniel Ortiz-Martínez, A. Sanchís, F. Casacuberta
Interactive machine translation (IMT) [1] is an alternative approach to machine translation that integrates human expertise into the automatic translation process. In this framework, a human iteratively interacts with a system until the output the human desires is completely generated. Traditionally, interaction has been performed using a keyboard and a mouse. However, the use of touchscreens has recently become widespread: many touchscreen devices already exist in the market, namely mobile phones, laptops and tablet computers such as the iPad. In this work, we propose a new interaction modality that takes advantage of such devices, for which online handwritten text seems a very natural input method. Multimodality is formulated as an extension to the traditional IMT protocol in which the user can amend errors by writing text with an electronic pen or a stylus on a touchscreen. Different approaches to modality fusion have been studied and assessed on the Xerox task. Finally, a thorough study of the errors committed by the online handwriting system points to future work directions.
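A minimal sketch of the prefix-based interaction loop that IMT protocols of this kind follow: the system proposes a full translation, the user validates a prefix and corrects the next word (by keyboard, or by pen in the multimodal setting), and the system regenerates the suffix conditioned on the validated prefix. The function names and callback signatures are assumptions for illustration, not the authors' system.

```python
from typing import Callable, List, Tuple

def imt_session(source: str,
                translate_suffix: Callable[[str, List[str]], List[str]],
                user_correction: Callable[[List[str]], Tuple[int, str]]) -> List[str]:
    """Run one interactive translation session and return the accepted translation."""
    prefix: List[str] = []
    while True:
        # the engine completes the validated prefix into a full hypothesis
        hypothesis = prefix + translate_suffix(source, prefix)
        pos, word = user_correction(hypothesis)     # pos == -1 means the user accepts
        if pos < 0:
            return hypothesis
        # everything before `pos` is validated; `word` is the typed or handwritten fix
        prefix = hypothesis[:pos] + [word]
```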
Citations: 15
Vocal sketching: a prototype tool for designing multimodal interaction
Pub Date : 2010-11-08 DOI: 10.1145/1891903.1891956
Koray Tahiroglu, T. Ahmaniemi
Dynamic audio feedback enriches interaction with a mobile device. Novel sensor technologies and audio synthesis tools provide an infinite number of possibilities for designing the interaction between sensory input and audio output. This paper presents a study in which vocal sketching was used as a prototyping method to capture ideas and expectations in the early stages of designing multimodal interaction. We introduce an experiment in which a graspable mobile device was given to participants, who were asked to sketch vocally the sounds the device should produce in communication and musical expression scenarios. The sensory input methods were limited to gestures such as touch, squeeze and movement. Vocal sketching let us examine more closely how gesture and sound could be coupled in the use of our prototype device, such as moving the device upwards with rising pitch. The results reported in this paper have already informed our opinions and expectations for the actual design phase of the audio modality.
Citations: 3
Component-based high fidelity interactive prototyping of post-WIMP interactions
Pub Date : 2010-11-08 DOI: 10.1145/1891903.1891961
Jean-Yves Lionel Lawson, Mathieu Coterot, C. Carincotte, B. Macq
In order to support interactive high-fidelity prototyping of post-WIMP user interactions, we propose a multi-fidelity design method based on a unifying component-based model and supported by an advanced tool suite, the OpenInterface Platform Workbench. Our approach strives to support a collaborative (programmer-designer) and user-centered design activity. The workbench architecture allows exploration of novel interaction techniques through seamless integration and adaptation of heterogeneous components, high-fidelity rapid prototyping, runtime evaluation and fine-tuning of designed systems. This paper illustrates, through the iterative construction of a running example, how OpenInterface leverages existing resources and fosters the creation of non-conventional interaction techniques.
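An illustrative sketch of component-based composition in the spirit described above, where heterogeneous components expose output ports that can be wired together at prototyping time. The classes and wiring API below are assumptions for illustration only, not the OpenInterface Platform API.

```python
from typing import Callable, Dict, List

class Component:
    """A processing unit with named output ports that can be wired to downstream sinks."""
    def __init__(self, name: str):
        self.name = name
        self._sinks: Dict[str, List[Callable]] = {}

    def connect(self, port: str, sink: Callable) -> None:
        self._sinks.setdefault(port, []).append(sink)

    def emit(self, port: str, value) -> None:
        for sink in self._sinks.get(port, []):
            sink(value)

# wire a (simulated) accelerometer to a tilt-to-scroll adapter and a logger,
# mirroring how input, transformation and output components are chained in a prototype
accelerometer = Component("accelerometer")
accelerometer.connect("tilt", lambda angle: print(f"scroll by {angle * 40:.0f} px"))
accelerometer.connect("tilt", lambda angle: print(f"log: tilt={angle:.2f} rad"))
accelerometer.emit("tilt", 0.35)    # simulate one sensor reading
```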
Citations: 5
Analysis environment of conversational structure with nonverbal multimodal data
Pub Date : 2010-11-08 DOI: 10.1145/1891903.1891958
Y. Sumi, M. Yano, T. Nishida
This paper presents the IMADE (Interaction Measurement, Analysis, and Design Environment) project, which builds a recording and analyzing environment for human conversational interactions. The IMADE room is designed to record audio/visual, human-motion and eye-gaze data for building an interaction corpus, focusing mainly on understanding human nonverbal behaviors. In this paper, we present the notion of an interaction corpus and iCorpusStudio, a software environment for browsing and analyzing the interaction corpus. We also present a preliminary experiment on multiparty conversations.
Citations: 28
Quantifying group problem solving with stochastic analysis
Pub Date : 2010-11-08 DOI: 10.1145/1891903.1891954
Wen Dong, A. Pentland
Quantifying the relationship between group dynamics and group performance is a key issue in increasing group performance. In this paper, we discuss how group performance is related to several heuristics about group dynamics in performing several typical tasks. We also present our novel stochastic modeling approach for learning the structure of group dynamics. Our performance estimators account for between 40 and 60% of the variance across a range of group problem-solving tasks.
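A minimal sketch of the kind of analysis this abstract summarizes: regress a group performance score on features of group dynamics and report the fraction of variance explained (R^2). The features and data below are synthetic placeholders, not the authors' stochastic model or dataset.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# synthetic group-dynamics features, e.g. speaking-turn entropy, overlap rate, turn count
X = rng.normal(size=(40, 3))
# synthetic performance scores loosely driven by those features plus noise
y = X @ np.array([0.7, -0.4, 0.2]) + rng.normal(scale=0.8, size=40)

model = LinearRegression().fit(X, y)
print(f"variance explained: {model.score(X, y):.2f}")   # R^2 of the fitted estimator
```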
Citations: 16
Mood avatar: automatic text-driven head motion synthesis
Pub Date : 2010-11-08 DOI: 10.1145/1891903.1891951
Kaihui Mu, J. Tao, Jianfeng Che, Minghao Yang
Natural head motion is an indispensable part of realistic facial animation. This paper presents a novel approach to synthesize natural head motion automatically based on grammatical and prosodic features, which are extracted by the text analysis part of a Chinese Text-to-Speech (TTS) system. A two-layer clustering method is proposed to determine elementary head motion patterns from a multimodal database covering six emotional states. The mapping problem between textual information and elementary head motion patterns is modeled by Classification and Regression Trees (CART). With the emotional state specified by the user, results from text analysis are used to drive the corresponding CART model to create an emotional head motion sequence. The generated sequence is then spline-interpolated and used to drive a Chinese text-driven avatar. A comparison experiment indicates that this approach provides better head motion and a more engaging human-computer interaction than random or no head motion.
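A hedged sketch of the pipeline this abstract describes: a CART-style tree maps grammatical and prosodic features of the text to elementary head motion pattern indices, and the selected key poses are spline-interpolated into a continuous trajectory. The feature encodings, patterns and training data below are illustrative assumptions, not the authors' model or multimodal database.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from sklearn.tree import DecisionTreeClassifier   # CART-style decision tree

# toy training data: [part-of-speech id, pitch level, is-phrase-final] -> pattern id
X = np.array([[0, 2, 0], [1, 1, 0], [2, 0, 1], [1, 2, 1], [0, 0, 0]])
y = np.array([0, 1, 2, 2, 1])
cart = DecisionTreeClassifier(random_state=0).fit(X, y)

# elementary head motion patterns, here reduced to a single nod angle in degrees
pattern_nod = {0: 0.0, 1: 5.0, 2: 12.0}

# predict one pattern per syllable of a new utterance, then interpolate the key poses
features = np.array([[1, 2, 0], [0, 1, 0], [2, 0, 1], [1, 2, 1]])
key_angles = [pattern_nod[p] for p in cart.predict(features)]
key_times = np.arange(len(key_angles))                        # one key pose per syllable
trajectory = CubicSpline(key_times, key_angles)(np.linspace(0, len(key_angles) - 1, 30))
print(trajectory.round(1))                                    # smooth nod-angle curve
```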
Citations: 7
Gesture and voice prototyping for early evaluations of social acceptability in multimodal interfaces
Pub Date : 2010-11-08 DOI: 10.1145/1891903.1891925
J. Williamson, S. Brewster
Interaction techniques that require users to adopt new behaviors mean that designers must take into account social acceptability and user experience; otherwise the techniques may be rejected by users as too embarrassing to perform in public. This research uses a set of low-cost prototypes to study social acceptability and user perceptions of multimodal mobile interaction techniques early in the design process. We describe 4 prototypes that were used with 8 focus groups to evaluate user perceptions of novel multimodal interactions using gesture, speech and non-speech sounds, and to gain feedback about the usefulness of the prototypes for studying social acceptability. The results describe user perceptions of social acceptability and the realities of using multimodal interaction techniques in daily life. They also describe key differences between younger users (18-29) and older users (70-95) with respect to how these interaction techniques are evaluated and understood.
Citations: 43
Analyzing multimodal time series as dynamical systems
Pub Date : 2010-11-08 DOI: 10.1145/1891903.1891968
S. Hidaka, Chen Yu
We propose a novel approach to discovering latent structures from multimodal time series. We view a time series as observed data from an underlying dynamical system; in this way, analyzing multimodal time series can be viewed as finding latent structures of dynamical systems. In light of this, our approach is based on the concept of the generating partition, which is theoretically the best symbolization of a time series, maximizing the information preserved about the underlying continuous dynamical system. However, a generating partition is difficult to obtain for time series without explicit dynamical equations. Different from most previous approaches, which attempt to approximate the generating partition through various deterministic symbolization processes, our algorithm maintains and estimates a probabilistic distribution over a symbol set for each data point in a time series. To do so, we develop a Bayesian framework for probabilistic symbolization and demonstrate that the approach can be successfully applied to both simulated data and empirical data from multimodal agent-agent interactions. We suggest that this unsupervised learning algorithm has the potential to be used on various multimodal datasets as a first step to identify underlying structures between temporal variables.
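A minimal sketch of probabilistic symbolization in the spirit described above: instead of assigning each data point a single symbol, keep a distribution over the symbol set, here obtained as the posterior of a Gaussian mixture over a delay-embedded series. This is an illustrative stand-in, not the authors' Bayesian generating-partition estimator.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# a toy 1-D time series from the logistic map, delay-embedded into 2-D state vectors
x = np.empty(500)
x[0] = 0.4
for t in range(499):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])
states = np.column_stack([x[:-1], x[1:]])

# mixture components play the role of candidate symbols; the posterior over
# components gives one probability distribution over symbols per data point
gmm = GaussianMixture(n_components=3, random_state=0).fit(states)
symbol_probs = gmm.predict_proba(states)

print(symbol_probs[:3].round(2))   # soft symbol assignment for the first few points
```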
Citations: 3