
Latest Publications: Proceedings of the ACM Symposium on User Interface Software and Technology

A support to multi-devices web application
Xavier Le Pallec, Raphaël Marvie, J. Rouillard, Jean-Claude Tarby
Programming an application that uses interactive devices located on different terminals is not easy. Programming such applications with standard Web technologies (HTTP, JavaScript, a Web browser) is even more difficult. However, Web applications have attractive properties: they run on very different terminals, require no specific installation step, and their code can evolve at runtime. Our demonstration presents support for designing multi-device Web applications. After introducing the context of this work, we briefly describe some problems related to the design of multi-device Web applications. We then present the toolkit we have implemented to support the development of applications built upon distant interactive devices.
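The toolkit itself is not shown in this listing. As a rough illustration of the routing problem it addresses, the sketch below (all names hypothetical, plain Python standing in for the authors' Web stack) binds an input device on one terminal to a handler running on another; in a real Web deployment the dispatch step would travel over HTTP or a WebSocket channel.

```python
# Hypothetical sketch of cross-terminal event routing (not the authors' toolkit):
# a routing table maps (terminal, device) pairs to handler callables that may
# conceptually live on a different terminal.

class EventBus:
    def __init__(self):
        self.routes = {}  # (terminal, device) -> list of handlers

    def bind(self, terminal, device, handler):
        self.routes.setdefault((terminal, device), []).append(handler)

    def dispatch(self, terminal, device, event):
        # In a Web setting this hop would be a network round-trip.
        return [h(event) for h in self.routes.get((terminal, device), [])]

bus = EventBus()
# A slider widget on the phone drives a volume display on the TV.
bus.bind("phone", "slider", lambda e: f"tv volume set to {e['value']}")
print(bus.dispatch("phone", "slider", {"value": 7})[0])  # tv volume set to 7
```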
DOI: https://doi.org/10.1145/1866218.1866235 · pp. 391-392 · Published 2010-10-03
Citations: 5
OnObject: gestural play with tagged everyday objects
Keywon Chung, Michael Shilman, C. Merrill, H. Ishii
Many Tangible User Interface (TUI) systems employ sensor-equipped physical objects. However, they do not easily scale to users' actual environments: most everyday objects lack the necessary hardware, and modifying them requires hardware and software development by skilled individuals. This limits TUI creation by end users, resulting in inflexible interfaces in which the mapping between sensor input and output events cannot easily be modified to reflect the end user's wishes and circumstances. We introduce OnObject, a small device worn on the hand, which can program physical objects to respond to a set of gestural triggers. Users attach RFID tags to situated objects, grab them by the tag, and program their responses to grab, release, shake, swing, and thrust gestures using a built-in button and a microphone. In this paper, we demonstrate how novice end users, including preschool children, can instantly create engaging gestural object interfaces with sound feedback from toys, drawings, or clay.
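The programming model described above (tag, gesture, response) can be pictured as a small dispatch table. The sketch below is an invented approximation, not OnObject's firmware; the gesture set is the one named in the abstract, and the sound names are placeholders.

```python
# Hypothetical sketch of OnObject-style end-user programming: each tagged
# object holds per-gesture responses; a recognized gesture triggers its sound.

GESTURES = {"grab", "release", "shake", "swing", "thrust"}

class TaggedObject:
    def __init__(self, tag_id):
        self.tag_id = tag_id
        self.responses = {}

    def program(self, gesture, sound):
        # The user records this mapping with the button and microphone.
        assert gesture in GESTURES, f"unknown gesture: {gesture}"
        self.responses[gesture] = sound

    def on_gesture(self, gesture):
        return self.responses.get(gesture, "<no response>")

toy = TaggedObject("tag-042")            # tag id is a placeholder
toy.program("shake", "rattle.wav")
print(toy.on_gesture("shake"))   # rattle.wav
print(toy.on_gesture("thrust"))  # <no response>
```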
DOI: https://doi.org/10.1145/1866218.1866229 · pp. 379-380 · Published 2010-10-03
Citations: 25
DoubleFlip: a motion gesture delimiter for interaction
J. Ruiz, Yang Li
In order to use motion gestures with mobile devices it is imperative that the device be able to distinguish between input motion and everyday motion. In this abstract we present DoubleFlip, a unique motion gesture designed to act as an input delimiter for mobile motion gestures. We demonstrate that the DoubleFlip gesture is extremely resistant to false positive conditions, while still achieving high recognition accuracy. Since DoubleFlip is easy to perform and less likely to be accidentally invoked, it provides an always-active input event for mobile interaction.
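The abstract does not give the recognizer, so the toy detector below is an assumption: it treats a sign reversal of the gravity-aligned accelerometer axis as one flip, and reports a double flip when two consecutive flips complete within a short window (the window length is also an assumption).

```python
# Illustrative double-flip detector (parameters invented, not from the paper).

def detect_double_flip(samples, window=1.0):
    """samples: list of (timestamp_s, z_accel). True if two flips occur
    within `window` seconds of each other."""
    flips = []
    prev_sign = None
    for t, z in samples:
        sign = z >= 0
        if prev_sign is not None and sign != prev_sign:
            flips.append(t)  # the device's face reversed at time t
        prev_sign = sign
    return any(b - a <= window for a, b in zip(flips, flips[1:]))

# Screen flipped face-down and back within 0.6 s:
print(detect_double_flip([(0.0, 9.8), (0.3, -9.8), (0.6, 9.8)]))  # True
# A single rotation, as in everyday motion, does not trigger:
print(detect_double_flip([(0.0, 9.8), (0.3, -9.8)]))              # False
```

The appeal of the gesture is visible even in this toy: ordinary motion rarely produces two full reversals in quick succession, which is what makes it usable as an always-active delimiter.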
DOI: https://doi.org/10.1145/1866218.1866265 · pp. 449-450 · Published 2010-10-03
Citations: 10
User guided audio selection from complex sound mixtures
P. Smaragdis
In this paper we present a novel interface for selecting sounds in audio mixtures. Traditional interfaces in audio editors provide a graphical representation of sounds that is either a waveform or some variation of a time/frequency transform. Although these representations may let a user visually identify elements of sounds in a mixture, they do not facilitate object-specific editing (e.g. selecting only the voice of a singer in a song). Our interface instead uses audio guidance from the user to select a target sound within a mixture. The user is asked to vocalize (or otherwise sonically represent) the desired target sound, and an automatic process identifies and isolates the elements of the mixture that best relate to the user's input. This way of pointing to specific parts of an audio stream allows a user to perform audio selections that would otherwise have been infeasible.
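A drastically simplified stand-in for the matching step (an assumption, far simpler than the paper's method): describe each mixture frame and the user's vocal sketch by coarse band energies, then select the frames whose energy profile is closest to the sketch by cosine similarity.

```python
# Toy voice-guided frame selection; band-energy vectors are invented data.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def select_frames(mixture_frames, sketch, threshold=0.9):
    """Return indices of frames whose band-energy profile matches the sketch."""
    return [i for i, frame in enumerate(mixture_frames)
            if cosine(frame, sketch) >= threshold]

# Frames 0 and 2 are voice-like (low-band heavy); frame 1 is a cymbal hit.
mixture = [[0.9, 0.1, 0.0], [0.0, 0.1, 0.9], [0.8, 0.2, 0.0]]
voice_sketch = [1.0, 0.2, 0.0]   # the user's hummed approximation
print(select_frames(mixture, voice_sketch))  # [0, 2]
```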
DOI: https://doi.org/10.1145/1622176.1622193 · pp. 89-92 · Published 2009-10-04
Citations: 12
Using fNIRS brain sensing in realistic HCI settings: experiments and guidelines
E. Solovey, A. Girouard, K. Chauncey, Leanne M. Hirshfield, A. Sassaroli, F. Zheng, S. Fantini, R. Jacob
Because functional near-infrared spectroscopy (fNIRS) eases many of the restrictions of other brain sensors, it has the potential to open up new possibilities for HCI research. From our experience using fNIRS technology for HCI, we identify several considerations and provide guidelines for using fNIRS in realistic HCI laboratory settings. We empirically examine whether typical human behavior (e.g. head and facial movement) or computer interaction (e.g. keyboard and mouse usage) interferes with brain measurement using fNIRS. Based on the results of our study, we establish which physical behaviors inherent in computer usage interfere with accurate fNIRS sensing of cognitive state information, which can be corrected for in data analysis, and which are acceptable. With these findings, we hope to facilitate further adoption of fNIRS brain sensing technology in HCI research.
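One common flavor of the "corrected in data analysis" step (illustrative only; the paper's own corrections are not reproduced here) is subtracting a sliding-window baseline so that slow drifts and baseline shifts from movement do not swamp the hemodynamic signal.

```python
# Illustrative baseline correction for a single fNIRS channel: subtract a
# moving-average baseline computed over a short window around each sample.

def detrend(signal, window=5):
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        baseline = sum(signal[lo:hi]) / (hi - lo)
        out.append(signal[i] - baseline)
    return out

shifted = [1.0] * 5 + [4.0] * 5   # a step-like baseline shift at sample 5
corrected = detrend(shifted)
# Far from the step, the corrected signal is flat (zero):
print(corrected[0], corrected[-1])  # 0.0 0.0
```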
DOI: https://doi.org/10.1145/1622176.1622207 · pp. 157-166 · Published 2009-10-04
Citations: 136
A screen-space formulation for 2D and 3D direct manipulation
J. Reisman, Philip L. Davidson, Jefferson Y. Han
Rotate-Scale-Translate (RST) interactions have become the de facto standard when interacting with two-dimensional (2D) contexts in single-touch and multi-touch environments. Because the use of RST has thus far focused almost entirely on 2D, there are not yet standard techniques for extending these principles into three dimensions. In this paper we describe a screen-space method which fully captures the semantics of the traditional 2D RST multi-touch interaction, but also allows us to extend these same principles into three-dimensional (3D) interaction. Just like RST allows users to directly manipulate 2D contexts with two or more points, our method allows the user to directly manipulate 3D objects with three or more points. We show some novel interactions, which take perspective into account and are thus not available in orthographic environments. Furthermore, we identify key ambiguities and unexpected behaviors that arise when performing direct manipulation in 3D and offer solutions to mitigate the difficulties each presents. Finally, we show how to extend our method to meet application-specific control objectives, as well as show our method working in some example environments.
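For the 2D two-touch case that the paper generalizes, the RST transform has a well-known closed form (a standard derivation, not the paper's 3D screen-space solver): find the scale, rotation, and translation that map the two initial contact points onto their current positions. Complex numbers make the solve compact.

```python
# Closed-form 2D RST from two touch-point correspondences.
import cmath

def solve_rst(p0, p1, q0, q1):
    """p0, p1: initial touch points; q0, q1: current touch points.
    Points are complex numbers (x + y*1j).
    Returns (scale, rotation_radians, translation)."""
    z = (q1 - q0) / (p1 - p0)   # combined rotation + uniform scale
    t = q0 - z * p0             # translation that fixes the first contact
    return abs(z), cmath.phase(z), t

# Two fingers rotated 90 degrees about the origin, no scaling:
s, theta, t = solve_rst(1 + 0j, 2 + 0j, 0 + 1j, 0 + 2j)
print(round(s, 6), round(theta, 6))  # 1.0 1.570796
```

Keeping the contact points pinned under the fingers is exactly the constraint the paper carries into 3D, where three or more points over-determine the rigid transform and the ambiguities discussed above appear.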
DOI: https://doi.org/10.1145/1622176.1622190 · pp. 69-78 · Published 2009-10-04
Citations: 207
A practical pressure sensitive computer keyboard
P. Dietz, Benjamin D. Eidelson, Jonathan Westhues, Steven Bathiche
A pressure sensitive computer keyboard is presented that independently senses the force level on every depressed key. The design leverages existing membrane technologies and is suitable for low-cost, high-volume manufacturing. A number of representative applications are discussed.
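A typical way such a membrane sensor is read out (the calibration numbers below are invented, not the paper's data): each depressed key yields an ADC reading that is interpolated against a per-key calibration table to estimate applied force.

```python
# Hypothetical ADC-to-force mapping via piecewise-linear calibration.

def adc_to_force(adc, table=((100, 0.1), (500, 0.5), (900, 2.0))):
    """table: (adc_reading, force_newtons) calibration points, ascending."""
    if adc <= table[0][0]:
        return table[0][1]
    for (a0, f0), (a1, f1) in zip(table, table[1:]):
        if adc <= a1:
            return f0 + (f1 - f0) * (adc - a0) / (a1 - a0)
    return table[-1][1]  # clamp above the calibrated range

print(adc_to_force(500))  # 0.5
print(adc_to_force(700))  # midway through the 0.5-to-2.0 segment: 1.25
```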
DOI: https://doi.org/10.1145/1622176.1622187 · pp. 55-58 · Published 2009-10-04
Citations: 70
SemFeel: a user interface with semantic tactile feedback for mobile touch-screen devices
K. Yatani, K. Truong
One of the challenges with mobile touch-screen devices is that they do not provide tactile feedback to the user; the user is thus required to look at the screen to interact with them. In this paper, we present SemFeel, a tactile feedback system that informs the user about the presence of an object where she touches the screen and can offer additional semantic information about that item. Through multiple vibration motors attached to the back of a mobile touch-screen device, SemFeel can generate different patterns of vibration, such as ones that flow from right to left or from top to bottom, to help the user interact with the device. Through two user studies, we show that users can distinguish ten different patterns, including linear patterns and a circular pattern, at approximately 90% accuracy, and that SemFeel supports accurate eyes-free interactions.
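The flowing patterns described above amount to ordered schedules of motor pulses. The sketch below is an assumption (motor layout and timings are invented, not SemFeel's hardware): each semantic pattern expands into a list of (motor index, start time) activations.

```python
# Hypothetical pattern generator: a semantic "flow" becomes a pulse schedule.

MOTORS = {"left": 0, "right": 1, "top": 2, "bottom": 3}

def flow(direction, step_ms=80):
    """Return [(motor_index, start_ms), ...] for a directional vibration flow."""
    order = {"right_to_left": ["right", "left"],
             "top_to_bottom": ["top", "bottom"]}[direction]
    return [(MOTORS[m], i * step_ms) for i, m in enumerate(order)]

print(flow("right_to_left"))  # [(1, 0), (0, 80)]
print(flow("top_to_bottom"))  # [(2, 0), (3, 80)]
```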
DOI: https://doi.org/10.1145/1622176.1622198 · pp. 111-120 · Published 2009-10-04
Citations: 177
Enabling always-available input with muscle-computer interfaces
T. S. Saponas, Desney S. Tan, Dan Morris, Ravin Balakrishnan, Jim Turner, J. Landay
Previous work has demonstrated the viability of applying offline analysis to interpret forearm electromyography (EMG) and classify finger gestures on a physical surface. We extend those results to bring us closer to using muscle-computer interfaces for always-available input in real-world applications. We leverage existing taxonomies of natural human grips to develop a gesture set covering interaction in free space even when hands are busy with other objects. We present a system that classifies these gestures in real-time and we introduce a bi-manual paradigm that enables use in interactive systems. We report experimental results demonstrating four-finger classification accuracies averaging 79% for pinching, 85% while holding a travel mug, and 88% when carrying a weighted bag. We further show generalizability across different arm postures and explore the tradeoffs of providing real-time visual feedback.
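A heavily simplified stand-in for the classification step (features, data, and the nearest-centroid rule are all assumptions, not the paper's classifier): compute RMS amplitude per EMG channel as a feature vector and assign the label of the closest class centroid.

```python
# Toy EMG gesture classifier: per-channel RMS features + nearest centroid.
import math

def rms(channel):
    return math.sqrt(sum(x * x for x in channel) / len(channel))

def nearest_centroid(sample, centroids):
    """sample: list of raw EMG channels; centroids: label -> feature vector."""
    feats = [rms(ch) for ch in sample]
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(feats, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Invented centroids for two pinch gestures over two forearm channels:
centroids = {"pinch_index": [0.9, 0.1], "pinch_middle": [0.1, 0.9]}
sample = [[0.8, -0.9, 0.85], [0.1, -0.1, 0.12]]  # strong channel 0 activity
print(nearest_centroid(sample, centroids))  # pinch_index
```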
DOI: https://doi.org/10.1145/1622176.1622208 · pp. 167-176 · Published 2009-10-04
Citations: 325
Relaxed selection techniques for querying time-series graphs
Christian Holz, Steven K. Feiner
Time-series graphs are often used to visualize phenomena that change over time. Common tasks include comparing values at different points in time and searching for specified patterns, either exact or approximate. However, tools that support time-series graphs typically separate query specification from the actual search process, allowing users to adapt the level of similarity only after specifying the pattern. We introduce relaxed selection techniques, in which users implicitly define a level of similarity that can vary across the search pattern, while creating a search query with a single-gesture interaction. Users sketch over part of the graph, establishing the level of similarity through either spatial deviations from the graph, or the speed at which they sketch (temporal deviations). In a user study, participants were significantly faster when using our temporally relaxed selection technique than when using traditional techniques. In addition, they achieved significantly higher precision and recall with our spatially relaxed selection technique compared to traditional techniques.
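The spatially relaxed idea can be approximated in a few lines (this is an approximation of the concept, not the paper's algorithm): the sketch's vertical deviation from the graph at each sample defines a per-sample tolerance band, and a window of the series matches when every value falls inside its band.

```python
# Toy spatially relaxed pattern matching over a time series.

def window_matches(window, sketch):
    """sketch: list of (target_value, tolerance) per sample."""
    return all(abs(value - target) <= abs(tol)
               for value, (target, tol) in zip(window, sketch))

def find_matches(series, sketch):
    n = len(sketch)
    return [i for i in range(len(series) - n + 1)
            if window_matches(series[i:i + n], sketch)]

series = [0, 1, 5, 6, 5, 1, 0, 5, 7, 5]
# The user sketched a peak near 5-6-5; a sloppier stroke would widen `tol`.
sketch = [(5, 1), (6, 1), (5, 1)]
print(find_matches(series, sketch))  # [2, 7]
```

The second peak (5, 7, 5) is caught only because the tolerance admits a deviation of 1; tightening the sketch to zero tolerance would return just the exact match, which is the trade-off the technique lets users express with a single gesture.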
DOI: https://doi.org/10.1145/1622176.1622217 · pp. 213-222 · Published 2009-10-04
Citations: 46