
IUI. International Conference on Intelligent User Interfaces: latest publications

CAVIAR: a vibrotactile device for accessible reaching
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2167009
Sina Bahram, Arpan Chakraborty, R. Amant
CAVIAR is designed to aid people with vision impairment in locating, identifying, and acquiring objects in their peripersonal space. A mobile phone, worn on the chest, captures video in front of the user; the computer vision component locates the user's hand and objects in the video stream. The auditory component informs the user about the presence of objects. On user confirmation, the reaching component sends signals to vibrotactile actuators on the user's wristband, guiding the hand to a specific object. This paper describes an end-to-end prototype of CAVIAR and its formative evaluation.
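The guidance step described above (vibrotactile signals steering the hand toward a confirmed object) is not specified in code in the abstract, but a minimal sketch of such a loop might look like this. The actuator layout, thresholds, and function name are all illustrative assumptions, not details from the paper.

```python
import math

def actuator_command(hand, target, actuators=4):
    """Pick which of `actuators` evenly spaced wristband motors to fire,
    based on the planar direction from the hand to the target object.
    Returns (sector, intensity), or None once the hand is within reach.
    The 2 cm stop radius and linear intensity ramp are assumptions."""
    dx, dy = target[0] - hand[0], target[1] - hand[1]
    dist = math.hypot(dx, dy)
    if dist < 0.02:  # within 2 cm: stop vibrating, the object is reached
        return None
    angle = math.atan2(dy, dx) % (2 * math.pi)
    sector = int(round(angle / (2 * math.pi / actuators))) % actuators
    intensity = min(1.0, dist)  # stronger buzz the farther the hand is
    return sector, intensity
```

A controller would call this on every frame of the computer-vision output and drive the selected motor accordingly.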
Citations: 9
Where do facebook intelligent lists come from?
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2167020
Fatoumata G. Camara, Gaëlle Calvary, Rachel Demumieux, N. Mandran
On September 19, 2011, Facebook introduced "Intelligent Lists": Friends Lists (FL) automatically created and pre-filled based on users' and their contacts' profile information (education, work, city of residence, kin, etc.). In early 2011, we conducted a study on contact management in Facebook in order to understand users' real needs. Outcomes from this study suggest several recommendations, some of which can be found today in the Facebook Intelligent Lists. This paper provides explanations of the recent evolution in Facebook contact management. The user study involved 148 participants. From their Facebook accounts, we retrieved 340 Friends Lists and 347 family ties. Overall, the study led to numerous interesting outcomes. In this paper, we focus on those related to Friends Lists and, particularly, on recommendations that have not yet been implemented in Facebook.
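The pre-filling mechanism described above (lists seeded from shared profile attributes) can be illustrated with a small grouping sketch. The field names, the minimum group size, and the function itself are assumptions for illustration; Facebook's actual criteria are not public.

```python
from collections import defaultdict

def intelligent_lists(friends):
    """Group friends by shared profile fields, mimicking how a list
    such as 'Colleagues at X' or 'Lives in Y' could be pre-filled.
    `friends` maps a name to a dict of profile fields."""
    lists = defaultdict(set)
    for name, profile in friends.items():
        for field in ("work", "education", "city"):
            value = profile.get(field)
            if value:
                lists[(field, value)].add(name)
    # only surface groups with at least two members (assumed cutoff)
    return {k: v for k, v in lists.items() if len(v) >= 2}
```

Applied to a toy contact set, two friends sharing an employer land in one pre-filled list while singleton attributes are dropped.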
Citations: 3
Evaluating an organic interface for learning mathematics
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2167045
Bee Suan Wong
The current formats used for presenting mathematics, whether on paper or in electronic form, have usability limitations that make learning mathematics challenging. The concept of an Organic User Interface promises a natural interface that blends with the human ecology system and therefore affords a smoother transition and improved usability. This research aims to examine how the affordances of an Organic User Interface influence users' learning of important mathematical concepts. The relationship between learning time and the usability factors, or affordances, of an Organic User Interface will be determined and contrasted with those of Graphical User Interfaces.
Citations: 0
Mobile texting: can post-ASR correction solve the issues? an experimental study on gain vs. costs
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2166974
M. Feld, S. Momtazi, F. Freigang, D. Klakow, Christian A. Müller
The next big step in embedded, mobile speech recognition will be to allow completely free input, as is needed for messaging such as SMS or email. However, unconstrained dictation remains error-prone, especially when the environment is noisy. In this paper, we compare different methods for improving a given free-text dictation system used to enter text-based messages in embedded mobile scenarios, where distraction, interaction cost, and hardware limitations enforce stricter constraints than traditional scenarios. We present a corpus-based evaluation, measuring the trade-off between improvement of the word error rate and the interaction steps that are required under various parameters. Results show that by post-processing the output of a "black box" speech recognizer (e.g. a web-based speech recognition service), a reduction of word error rate by 55% (10.3% abs.) can be obtained. For further error reduction, however, a richer representation of the original hypotheses (e.g. lattice) is necessary.
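The headline metric above is word error rate; the quoted 55% relative and 10.3% absolute reduction together imply a baseline WER of roughly 10.3/0.55 ≈ 18.7%, falling to about 8.4% after correction. WER itself is the word-level Levenshtein distance normalized by reference length, which can be computed as follows (a standard definition, not code from the paper):

```python
def wer(ref, hyp):
    """Word error rate: minimum substitutions + insertions + deletions
    to turn `hyp` into `ref`, divided by the reference word count."""
    r, h = ref.split(), hyp.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)
```

For example, `wer("a b c", "a x c")` is 1/3: one substitution over three reference words.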
Citations: 12
Image registration for text-gaze alignment
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2167012
Pascual Martínez-Gómez, Chen Chen, T. Hara, Yoshinobu Kano, Akiko Aizawa
Applications using eye-tracking devices need a higher accuracy in recognition when the task reaches a certain complexity. Thus, more sophisticated methods to correct eye-tracking measurement errors are necessary to lower the penetration barrier of eye-trackers in unconstrained tasks. We propose to take advantage of the content or the structure of textual information displayed on the screen to build informed error-correction algorithms that generalize well. The idea is to use feature-based image registration techniques to perform a linear transformation of gaze coordinates to find a good alignment with text printed on the screen. In order to estimate the parameters of the linear transformation, three optimization strategies are proposed to avoid the problem of local minima, namely Monte Carlo, multi-resolution and multi-blur optimization. Experimental results show that a more precise alignment of gaze data with words on the screen can be achieved by using these methods, allowing a more reliable use of eye-trackers in complex and unconstrained tasks.
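The core correction step above is a linear transformation of gaze coordinates fitted so that gaze lands on the displayed text. As a minimal stand-in for that step, a per-axis map g' = a·g + b can be fitted in closed form by least squares; note the paper itself estimates its transformation with Monte Carlo, multi-resolution, and multi-blur search over an image-registration objective, not this regression, so the sketch below is an assumed simplification.

```python
def fit_axis(raw, true):
    """Closed-form least squares for the 1-D linear map g' = a*g + b,
    given raw gaze samples and the coordinates they should align to."""
    n = len(raw)
    mean_x = sum(raw) / n
    mean_y = sum(true) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(raw, true))
    var = sum((x - mean_x) ** 2 for x in raw)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b
```

Fitting x and y independently yields a scale-and-offset correction; recovering rotation or shear would require the full 2-D affine fit.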
Citations: 8
RadSpeech's mobile dialogue system for radiologists
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2167031
Daniel Sonntag, Christian Schulz, Christian Reuschling, Luis Galárraga
With RadSpeech, we aim to build the next generation of intelligent, scalable, and user-friendly semantic search interfaces for the medical imaging domain, based on semantic technologies. Ontology-based knowledge representation is used not only for the image contents, but also for the complex natural language understanding and dialogue management process. This demo shows a speech-based annotation system for radiology images and focuses on a new and effective way to annotate medical image regions with a specific, structured medical diagnosis while using speech and pointing gestures on the go.
Citations: 16
PINTER: interactive storytelling with physiological input
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2167039
Stephen W. Gilroy, J. Porteous, Fred Charles, M. Cavazza
The dominant interaction paradigm in Interactive Storytelling (IS) systems so far has been active interventions by the user by means of a variety of modalities. PINTER is an IS system that uses physiological inputs - surface electromyography (EMG) and galvanic skin response (GSR) [1] - as a form of passive interaction, opening up the possibility of the use of traditional filmic techniques [2, 3] to implement IS without requiring immersion-breaking interactive responses. The goal of this demonstration is to illustrate the ways in which passive interaction combined with filmic visualisation, dialogue and music, and a plan-based narrative generation approach can form a new basis for an adaptive interactive narrative.
Citations: 10
Exploring passive user interaction for adaptive narratives
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2166990
Stephen W. Gilroy, J. Porteous, Fred Charles, M. Cavazza
Previous Interactive Storytelling systems have been designed to allow active user intervention in an unfolding story, using established multi-modal interactive techniques to influence narrative development. In this paper we instead explore the use of a form of passive interaction where users' affective responses, measured by physiological proxies, drive a process of narrative adaptation. We introduce a system that implements a passive interaction loop as part of narrative generation, monitoring users' physiological responses to an on-going narrative visualization and using these to adapt the subsequent development of character relationships, narrative focus and pacing. Idiomatic cinematographic techniques applied to the visualization utilize existing theories of establishing characteristic emotional tone and viewer expectations to foster additional user response. Experimental results support the applicability of filmic emotional theories in a non-film visual realization, demonstrating significant appropriate user physiological response to narrative events and "emotional cues". The subsequent narrative adaptation provides a variation of viewing experience with no loss of narrative comprehension.
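The passive interaction loop described above (monitor physiological response, then adapt pacing and focus) can be caricatured with a threshold rule on a window of GSR samples. The thresholds, units, and action labels here are illustrative assumptions; the paper's adaptation operates on richer narrative-generation machinery.

```python
def adapt_pacing(gsr_window, low=2.0, high=6.0):
    """Map a window of GSR samples (assumed microsiemens) to a pacing
    decision for the narrative generator. Purely illustrative thresholds."""
    mean = sum(gsr_window) / len(gsr_window)
    if mean < low:
        return "raise_tension"    # viewer under-aroused: tighten pacing
    if mean > high:
        return "release_tension"  # viewer over-aroused: slow down
    return "maintain"
```

In a running system this decision would feed the plan-based narrative generator before it commits to the next scene, closing the passive loop without any explicit user action.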
Citations: 48
A demo of a facial UI design approach for digital artists
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2167026
Pedro Bastos, X. Alvarez, V. Orvalho
In the character animation industry, animators use facial UIs to animate a character's face. A facial UI provides widgets and handles that the animator interacts with to control the character's facial regions. This paper presents a facial UI design approach to control the animation of the six basic facial expressions of the anthropomorphic face. The design is based on square-shaped widgets holding circular handles that allow the animator to produce the muscular activity relative to the basic facial expressions. We have implemented a prototype of the facial UI design in the Blender open-source animation software and conducted a preliminary pilot study with three animators. Two parameters were evaluated: the number of clicks and the time taken to animate the six basic facial expressions. The study reveals there was little variation in the values each animator marked for both parameters, despite the natural difference in their creative performance.
Citations: 1
A glove for tapping and discrete 1D/2D input
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2166986
Sam Miller, A. Smith, Sina Bahram, R. Amant
This paper describes a glove with which users enter input by tapping fingertips with the thumb or by rubbing the thumb over the palmar surfaces of the middle and index fingers. The glove has been informally tested as the controller for two semi-autonomous robots in a 3D simulation environment. A preliminary evaluation of the glove's performance is presented.
Citations: 6