
Proceedings of the ACM Symposium on User Interface Software and Technology: Latest Publications

Detecting and leveraging finger orientation for interaction with direct-touch surfaces
Feng Wang, Xiang Cao, Xiangshi Ren, Pourang Irani
Current interactions on direct-touch interactive surfaces are often modeled based on properties of the input channel that are common in traditional graphical user interfaces (GUI) such as x-y coordinate information. Leveraging additional information available on the surfaces could potentially result in richer and novel interactions. In this paper we specifically explore the role of finger orientation. This property is typically ignored in touch-based interactions partly because of the ambiguity in determining it solely from the contact shape. We present a simple algorithm that unambiguously detects the directed finger orientation vector in real-time from contact information only, by considering the dynamics of the finger landing process. Results of an experimental evaluation show that our algorithm is stable and accurate. We then demonstrate how finger orientation can be leveraged to enable novel interactions and to infer higher-level information such as hand occlusion or user position. We present a set of orientation-aware interaction techniques and widgets for direct-touch surfaces.
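The landing-dynamics idea lends itself to a compact illustration. The Python sketch below is not the authors' algorithm, only a minimal reconstruction of the approach the abstract describes, assuming each contact frame arrives as a list of (x, y) pixel coordinates; the function names and the centroid-drift heuristic are assumptions made here for clarity.

```python
import math
import numpy as np

def undirected_axis(points):
    """Principal axis of the contact region via PCA (ambiguous by 180 degrees).

    Assumes a frame contains several contact pixels, not a single point.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]
    return axis / np.linalg.norm(axis)

def directed_orientation(landing_frames):
    """Resolve the 180-degree ambiguity from the landing dynamics.

    landing_frames: successive lists of (x, y) contact pixels captured while
    the finger lands.  As the finger flattens, the contact centroid tends to
    drift from the fingertip toward the finger pad, i.e. away from the
    pointing direction, so the drift fixes the sign of the axis.
    """
    first = np.asarray(landing_frames[0], dtype=float).mean(axis=0)
    last = np.asarray(landing_frames[-1], dtype=float).mean(axis=0)
    drift = last - first                       # centroid motion while landing
    axis = undirected_axis(landing_frames[-1])
    if np.dot(axis, drift) > 0:                # point the axis at the fingertip
        axis = -axis
    return math.degrees(math.atan2(axis[1], axis[0]))
```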
{"title":"Detecting and leveraging finger orientation for interaction with direct-touch surfaces","authors":"Feng Wang, Xiang Cao, Xiangshi Ren, Pourang Irani","doi":"10.1145/1622176.1622182","DOIUrl":"https://doi.org/10.1145/1622176.1622182","url":null,"abstract":"Current interactions on direct-touch interactive surfaces are often modeled based on properties of the input channel that are common in traditional graphical user interfaces (GUI) such as x-y coordinate information. Leveraging additional information available on the surfaces could potentially result in richer and novel interactions. In this paper we specifically explore the role of finger orientation. This property is typically ignored in touch-based interactions partly because of the ambiguity in determining it solely from the contact shape. We present a simple algorithm that unambiguously detects the directed finger orientation vector in real-time from contact information only, by considering the dynamics of the finger landing process. Results of an experimental evaluation show that our algorithm is stable and accurate. We then demonstrate how finger orientation can be leveraged to enable novel interactions and to infer higher-level information such as hand occlusion or user position. We present a set of orientation-aware interaction techniques and widgets for direct-touch surfaces.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"59 1","pages":"23-32"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84591133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 162
Relaxed selection techniques for querying time-series graphs
Christian Holz, Steven K. Feiner
Time-series graphs are often used to visualize phenomena that change over time. Common tasks include comparing values at different points in time and searching for specified patterns, either exact or approximate. However, tools that support time-series graphs typically separate query specification from the actual search process, allowing users to adapt the level of similarity only after specifying the pattern. We introduce relaxed selection techniques, in which users implicitly define a level of similarity that can vary across the search pattern, while creating a search query with a single-gesture interaction. Users sketch over part of the graph, establishing the level of similarity through either spatial deviations from the graph, or the speed at which they sketch (temporal deviations). In a user study, participants were significantly faster when using our temporally relaxed selection technique than when using traditional techniques. In addition, they achieved significantly higher precision and recall with our spatially relaxed selection technique compared to traditional techniques.
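As a rough illustration of the matching step, the sketch below (a guess at the general mechanism, not the authors' code) derives a per-sample tolerance from either the spatial deviation of the ink from the graph or the sketching speed, and then accepts any window of the series that stays inside that envelope; the names, the offset normalization, and the speed_scale factor are all assumptions.

```python
import numpy as np

def tolerance_from_sketch(graph_segment, sketch_y, sketch_speed,
                          mode="spatial", speed_scale=0.5):
    """Per-sample tolerance envelope derived from the user's sketch.

    graph_segment / sketch_y: y-values of the sketched-over graph region and
    of the ink stroke, resampled at the same x positions.  sketch_speed:
    drawing speed at each sample.  All names and scales are illustrative.
    """
    if mode == "spatial":
        # Looser wherever the ink strays further from the underlying graph.
        return np.abs(np.asarray(sketch_y) - np.asarray(graph_segment))
    # Temporal mode: the faster the stroke was drawn, the more relaxed the match.
    return speed_scale * np.asarray(sketch_speed, dtype=float)

def find_matches(series, pattern, tolerance):
    """Start indices where the series stays inside the tolerance envelope.

    pattern is the graph segment the user sketched over (the query shape).
    """
    series = np.asarray(series, dtype=float)
    pattern = np.asarray(pattern, dtype=float)
    tol = np.asarray(tolerance, dtype=float)
    hits = []
    for start in range(len(series) - len(pattern) + 1):
        window = series[start:start + len(pattern)]
        offset = window[0] - pattern[0]        # compare shape, not absolute level
        if np.all(np.abs(window - (pattern + offset)) <= tol):
            hits.append(start)
    return hits
```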
{"title":"Relaxed selection techniques for querying time-series graphs","authors":"Christian Holz, Steven K. Feiner","doi":"10.1145/1622176.1622217","DOIUrl":"https://doi.org/10.1145/1622176.1622217","url":null,"abstract":"Time-series graphs are often used to visualize phenomena that change over time. Common tasks include comparing values at different points in time and searching for specified patterns, either exact or approximate. However, tools that support time-series graphs typically separate query specification from the actual search process, allowing users to adapt the level of similarity only after specifying the pattern. We introduce relaxed selection techniques, in which users implicitly define a level of similarity that can vary across the search pattern, while creating a search query with a single-gesture interaction. Users sketch over part of the graph, establishing the level of similarity through either spatial deviations from the graph, or the speed at which they sketch (temporal deviations). In a user study, participants were significantly faster when using our temporally relaxed selection technique than when using traditional techniques. In addition, they achieved significantly higher precision and recall with our spatially relaxed selection technique compared to traditional techniques.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"18 1","pages":"213-222"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72863944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 46
Ripples: utilizing per-contact visualizations to improve user interaction with touch displays
Daniel J. Wigdor, Sarah Williams, Michael Cronin, R. Levy, Katie White, Maxim Mazeev, Hrvoje Benko
We present Ripples, a system which enables visualizations around each contact point on a touch display and, through these visualizations, provides feedback to the user about successes and errors of their touch interactions. Our visualization system is engineered to be overlaid on top of existing applications without requiring the applications to be modified in any way, and functions independently of the application's responses to user input. Ripples reduces the fundamental problem of ambiguity of feedback when an action results in an unexpected behaviour. This ambiguity can be caused by a wide variety of sources. We describe the ambiguity problem, and identify those sources. We then define a set of visual states and transitions needed to resolve this ambiguity, of use to anyone designing touch applications or systems. We then present the Ripples implementation of visualizations for those states, and the results of a user study demonstrating user preference for the system, and demonstrating its utility in reducing errors.
{"title":"Ripples: utilizing per-contact visualizations to improve user interaction with touch displays","authors":"Daniel J. Wigdor, Sarah Williams, Michael Cronin, R. Levy, Katie White, Maxim Mazeev, Hrvoje Benko","doi":"10.1145/1622176.1622180","DOIUrl":"https://doi.org/10.1145/1622176.1622180","url":null,"abstract":"We present Ripples, a system which enables visualizations around each contact point on a touch display and, through these visualizations, provides feedback to the user about successes and errors of their touch interactions. Our visualization system is engineered to be overlaid on top of existing applications without requiring the applications to be modified in any way, and functions independently of the application's responses to user input. Ripples reduces the fundamental problem of ambiguity of feedback when an action results in an unexpected behaviour. This ambiguity can be caused by a wide variety of sources. We describe the ambiguity problem, and identify those sources. We then define a set of visual states and transitions needed to resolve this ambiguity, of use to anyone designing touch applications or systems. We then present the Ripples implementation of visualizations for those states, and the results of a user study demonstrating user preference for the system, and demonstrating its utility in reducing errors.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"5 1","pages":"3-12"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72905284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 77
The web page as a WYSIWYG end-user customizable database-backed information management application
David R Karger, S. Ostler, Ryan Lee
Dido is an application (and application development environment) in a web page. It is a single web page containing rich structured data, an AJAXy interactive visualizer/editor for that data, and a "metaeditor" for WYSIWYG editing of the visualizer/editor. Historically, users have been limited to the data schemas, visualizations, and interactions offered by a small number of heavyweight applications. In contrast, Dido encourages and enables the end user to edit (not code) in his or her web browser a distinct ephemeral interaction "wrapper" for each data collection that is specifically suited to its intended use. Dido's active document metaphor has been explored before but we show how, given today's web infrastructure, it can be deployed in a small self-contained HTML document without touching a web client or server.
{"title":"The web page as a WYSIWYG end-user customizable database-backed information management application","authors":"David R Karger, S. Ostler, Ryan Lee","doi":"10.1145/1622176.1622223","DOIUrl":"https://doi.org/10.1145/1622176.1622223","url":null,"abstract":"Dido is an application (and application development environment) in a web page. It is a single web page containing rich structured data, an AJAXy interactive visualizer/editor for that data, and a \"metaeditor\" for WYSIWYG editing of the visualizer/editor. Historically, users have been limited to the data schemas, visualizations, and interactions offered by a small number of heavyweight applications. In contrast, Dido encourages and enables the end user to edit (not code) in his or her web browser a distinct ephemeral interaction \"wrapper\" for each data collection that is specifically suited to its intended use. Dido's active document metaphor has been explored before but we show how, given today's web infrastructure, it can be deployed in a small self-contained HTML document without touching a web client or server.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"257-260"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88647259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 30
Perceptual interpretation of ink annotations on line charts
Nicholas Kong, Maneesh Agrawala
Asynchronous collaborators often use freeform ink annotations to point to visually salient perceptual features of line charts such as peaks or humps, valleys, rising slopes and declining slopes. We present a set of techniques for interpreting such annotations to algorithmically identify the corresponding perceptual parts. Our approach is to first apply a parts-based segmentation algorithm that identifies the visually salient perceptual parts in the chart. Our system then analyzes the freeform annotations to infer the corresponding peaks, valleys or sloping segments. Once the system has identified the perceptual parts it can highlight them to draw further attention and reduce ambiguity of interpretation in asynchronous collaborative discussions.
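A toy version of the pipeline might look like the sketch below: it stands in for the paper's parts-based segmentation with a simple split into monotone runs, then picks the run whose x-extent best overlaps the ink stroke. Everything here (the names, the overlap heuristic) is an illustrative assumption rather than the authors' method.

```python
import numpy as np

def monotone_parts(y):
    """Split a series into maximal rising/falling runs (stand-in 'parts')."""
    y = np.asarray(y, dtype=float)
    parts, start = [], 0
    direction = np.sign(y[1] - y[0])
    for i in range(1, len(y) - 1):
        d = np.sign(y[i + 1] - y[i])
        if d != 0 and d != direction:
            parts.append((start, i, "rising" if direction > 0 else "falling"))
            start, direction = i, d
    parts.append((start, len(y) - 1, "rising" if direction > 0 else "falling"))
    return parts

def part_under_annotation(parts, stroke_x):
    """Pick the part whose x-extent best overlaps the ink stroke's x-extent."""
    lo, hi = min(stroke_x), max(stroke_x)
    def overlap(part):
        s, e, _ = part
        return max(0.0, min(e, hi) - max(s, lo))
    return max(parts, key=overlap)

# Example: an ink stroke spanning samples 4..7 selects the falling run.
series = [1, 3, 6, 8, 7, 5, 2, 1, 2, 4]
print(part_under_annotation(monotone_parts(series), stroke_x=[4, 7]))
```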
{"title":"Perceptual interpretation of ink annotations on line charts","authors":"Nicholas Kong, Maneesh Agrawala","doi":"10.1145/1622176.1622219","DOIUrl":"https://doi.org/10.1145/1622176.1622219","url":null,"abstract":"Asynchronous collaborators often use freeform ink annotations to point to visually salient perceptual features of line charts such as peaks or humps, valleys, rising slopes and declining slopes. We present a set of techniques for interpreting such annotations to algorithmically identify the corresponding perceptual parts. Our approach is to first apply a parts-based segmentation algorithm that identifies the visually salient perceptual parts in the chart. Our system then analyzes the freeform annotations to infer the corresponding peaks, valleys or sloping segments. Once the system has identified the perceptual parts it can highlight them to draw further attention and reduce ambiguity of interpretation in asynchronous collaborative discussions.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"29 1","pages":"233-236"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91225529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
Virtual shelves: interactions with orientation aware devices
F. Li, David Dearman, K. Truong
Triggering shortcuts or actions on a mobile device often requires a long sequence of key presses. Because the functions of buttons are highly dependent on the current application's context, users are required to look at the display during interaction, even in many mobile situations when eyes-free interactions may be preferable. We present Virtual Shelves, a technique to trigger programmable shortcuts that leverages the user's spatial awareness and kinesthetic memory. With Virtual Shelves, the user triggers shortcuts by orienting a spatially-aware mobile device within the circular hemisphere in front of her. This space is segmented into definable and selectable regions along the phi and theta planes. We show that users can accurately point to 7 regions on the theta and 4 regions on the phi plane using only their kinesthetic memory. Building upon these results, we then evaluate a proof-of-concept prototype of the Virtual Shelves using a Nokia N93. The results show that Virtual Shelves is faster than the N93's native interface for common mobile phone tasks.
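The mapping from orientation to region is simple enough to sketch. The snippet below assumes the sensed azimuth (theta) spans the half-circle in front of the user and the elevation (phi) spans 0 to 90 degrees; the ranges, bin counts, and function name are assumptions for illustration, loosely following the 7 by 4 resolution the study reports users can distinguish.

```python
def shelf_region(theta_deg, phi_deg, n_theta=7, n_phi=4,
                 theta_range=(-90.0, 90.0), phi_range=(0.0, 90.0)):
    """Map a device orientation to a discrete 'shelf' in the hemisphere.

    theta_deg: azimuth within the half-circle in front of the user.
    phi_deg:   elevation above the horizontal plane.
    """
    def bin_index(value, lo, hi, bins):
        value = min(max(value, lo), hi)          # clamp into the sensed range
        i = int((value - lo) / (hi - lo) * bins)
        return min(i, bins - 1)                  # upper edge falls in last bin

    t = bin_index(theta_deg, *theta_range, n_theta)
    p = bin_index(phi_deg, *phi_range, n_phi)
    return p * n_theta + t                       # region id, row-major

# Example: pointing slightly left of centre, a little above horizontal.
print(shelf_region(-20.0, 30.0))                 # -> 9 with the default binning
```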
{"title":"Virtual shelves: interactions with orientation aware devices","authors":"F. Li, David Dearman, K. Truong","doi":"10.1145/1622176.1622200","DOIUrl":"https://doi.org/10.1145/1622176.1622200","url":null,"abstract":"Triggering shortcuts or actions on a mobile device often requires a long sequence of key presses. Because the functions of buttons are highly dependent on the current application's context, users are required to look at the display during interaction, even in many mobile situations when eyes-free interactions may be preferable. We present Virtual Shelves, a technique to trigger programmable shortcuts that leverages the user's spatial awareness and kinesthetic memory. With Virtual Shelves, the user triggers shortcuts by orienting a spatially-aware mobile device within the circular hemisphere in front of her. This space is segmented into definable and selectable regions along the phi and theta planes. We show that users can accurately point to 7 regions on the theta and 4 regions on the phi plane using only their kinesthetic memory. Building upon these results, we then evaluate a proof-of-concept prototype of the Virtual Shelves using a Nokia N93. The results show that Virtual Shelves is faster than the N93's native interface for common mobile phone tasks.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"47 1","pages":"125-128"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82091516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 138
Bonfire: a nomadic system for hybrid laptop-tabletop interaction
Shaun K. Kane, Daniel Avrahami, J. Wobbrock, B. Harrison, Adam D. Rea, Matthai Philipose, A. LaMarca
We present Bonfire, a self-contained mobile computing system that uses two laptop-mounted laser micro-projectors to project an interactive display space to either side of a laptop keyboard. Coupled with each micro-projector is a camera to enable hand gesture tracking, object recognition, and information transfer within the projected space. Thus, Bonfire is neither a pure laptop system nor a pure tabletop system, but an integration of the two into one new nomadic computing platform. This integration (1) enables observing the periphery and responding appropriately, e.g., to the casual placement of objects within its field of view, (2) enables integration between physical and digital objects via computer vision, (3) provides a horizontal surface in tandem with the usual vertical laptop display, allowing direct pointing and gestures, and (4) enlarges the input/output space to enrich existing applications. We describe Bonfire's architecture, and offer scenarios that highlight Bonfire's advantages. We also include lessons learned and insights for further development and use.
{"title":"Bonfire: a nomadic system for hybrid laptop-tabletop interaction","authors":"Shaun K. Kane, Daniel Avrahami, J. Wobbrock, B. Harrison, Adam D. Rea, Matthai Philipose, A. LaMarca","doi":"10.1145/1622176.1622202","DOIUrl":"https://doi.org/10.1145/1622176.1622202","url":null,"abstract":"We present Bonfire, a self-contained mobile computing system that uses two laptop-mounted laser micro-projectors to project an interactive display space to either side of a laptop keyboard. Coupled with each micro-projector is a camera to enable hand gesture tracking, object recognition, and information transfer within the projected space. Thus, Bonfire is neither a pure laptop system nor a pure tabletop system, but an integration of the two into one new nomadic computing platform. This integration (1) enables observing the periphery and responding appropriately, e.g., to the casual placement of objects within its field of view, (2) enables integration between physical and digital objects via computer vision, (3) provides a horizontal surface in tandem with the usual vertical laptop display, allowing direct pointing and gestures, and (4) enlarges the input/output space to enrich existing applications. We describe Bonfire's architecture, and offer scenarios that highlight Bonfire's advantages. We also include lessons learned and insights for further development and use.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"120 1","pages":"129-138"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87956975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 133
TapSongs: tapping rhythm-based passwords on a single binary sensor
J. Wobbrock
TapSongs are presented, which enable user authentication on a single "binary" sensor (e.g., button) by matching the rhythm of tap down/up events to a jingle timing model created by the user. We describe our matching algorithm, which employs absolute match criteria and learns from successful logins. We also present a study of 10 subjects showing that after they created their own TapSong models from 12 examples (< 2 minutes), their subsequent login attempts were 83.2% successful. Furthermore, aural and visual eavesdropping of the experimenter's logins resulted in only 10.7% successful imposter logins by subjects. Even when subjects heard the target jingles played by a synthesized piano, they were only 19.4% successful logging in as imposters. These results are attributable to subtle but reliable individual differences in people's tapping, which are supported by prior findings in music psychology.
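The abstract's absolute match criteria and learning from successful logins suggest a comparison roughly like the sketch below. This is a hedged reconstruction, not the published algorithm: the tempo normalization, tolerance, and learning rate are all assumptions.

```python
import numpy as np

def check_login(tap_times, model_intervals, tolerance=0.2, learn_rate=0.1):
    """Compare a login attempt's tap rhythm against the stored model.

    tap_times: timestamps (seconds) of tap-down events in this attempt.
    model_intervals: stored inter-tap intervals from enrollment.
    Returns (accepted, possibly updated model).
    """
    intervals = np.diff(np.asarray(tap_times, dtype=float))
    model = np.asarray(model_intervals, dtype=float)
    if len(intervals) != len(model):
        return False, model                      # wrong number of taps
    # Normalize to unit total duration so overall tempo doesn't matter.
    attempt = intervals / intervals.sum()
    target = model / model.sum()
    # Per-interval criterion: every interval must individually be close enough.
    if np.any(np.abs(attempt - target) > tolerance * target):
        return False, model
    # Successful login: nudge the model toward this attempt.
    updated = (1 - learn_rate) * model + learn_rate * intervals
    return True, updated

# A 7-tap jingle model, then an attempt tapped at a slower but proportional tempo.
model = [0.3, 0.15, 0.15, 0.3, 0.6, 0.3]
attempt = [0.0, 0.4, 0.6, 0.8, 1.2, 2.0, 2.4]
print(check_login(attempt, model)[0])            # -> True
```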
{"title":"TapSongs: tapping rhythm-based passwords on a single binary sensor","authors":"J. Wobbrock","doi":"10.1145/1622176.1622194","DOIUrl":"https://doi.org/10.1145/1622176.1622194","url":null,"abstract":"TapSongs are presented, which enable user authentication on a single \"binary\" sensor (e.g., button) by matching the rhythm of tap down/up events to a jingle timing model created by the user. We describe our matching algorithm, which employs absolute match criteria and learns from successful logins. We also present a study of 10 subjects showing that after they created their own TapSong models from 12 examples (< 2 minutes), their subsequent login attempts were 83.2% successful. Furthermore, aural and visual eavesdropping of the experimenter's logins resulted in only 10.7% successful imposter logins by subjects. Even when subjects heard the target jingles played by a synthesized piano, they were only 19.4% successful logging in as imposters. These results are attributable to subtle but reliable individual differences in people's tapping, which are supported by prior findings in music psychology.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"93-96"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77241892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 62
Contact area interaction with sliding widgets
T. Moscovich
We show how to design touchscreen widgets that respond to a finger's contact area. In standard touchscreen systems a finger often appears to touch several screen objects, but the system responds as though only a single pixel is touched. In contact area interaction all objects under the finger respond to the touch. Users activate control widgets by sliding a movable element, as though flipping a switch. These Sliding Widgets resolve selection ambiguity and provide designers with a rich vocabulary of self-disclosing interaction mechanism. We showcase the design of several types of Sliding Widgets, and report study results showing that the simplest of these widgets, the Sliding Button, performs on-par with medium-sized pushbuttons and offers greater accuracy for small-sized buttons.
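The two ideas, area hit-testing and slide-to-activate, can be sketched as follows. The geometry, the slide threshold, and the class names are illustrative assumptions, not the widgets from the paper.

```python
from dataclasses import dataclass

@dataclass
class SlidingButton:
    """Toy sliding widget: activates only when its thumb is dragged far enough."""
    x: float
    y: float
    w: float
    h: float
    slide_needed: float = 30.0       # pixels of travel required (assumed value)
    slide: float = 0.0
    active: bool = False

    def overlaps(self, cx, cy, radius):
        # Circle-vs-rectangle test: does the contact area cover this widget?
        nearest_x = min(max(cx, self.x), self.x + self.w)
        nearest_y = min(max(cy, self.y), self.y + self.h)
        return (cx - nearest_x) ** 2 + (cy - nearest_y) ** 2 <= radius ** 2

    def drag(self, dx):
        # Sliding, not mere touching, is what commits the action.
        self.slide += dx
        if abs(self.slide) >= self.slide_needed:
            self.active = True

def widgets_under_contact(widgets, cx, cy, radius):
    """Every widget under the finger's contact area responds, not just one pixel."""
    return [w for w in widgets if w.overlaps(cx, cy, radius)]

# A fat finger covering two adjacent buttons: both respond to the touch,
# but only the one the user slides gets activated.
buttons = [SlidingButton(0, 0, 40, 40), SlidingButton(45, 0, 40, 40)]
touched = widgets_under_contact(buttons, 42, 20, radius=15)
touched[0].drag(35)
print([b.active for b in buttons])   # -> [True, False]
```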
{"title":"Contact area interaction with sliding widgets","authors":"T. Moscovich","doi":"10.1145/1622176.1622181","DOIUrl":"https://doi.org/10.1145/1622176.1622181","url":null,"abstract":"We show how to design touchscreen widgets that respond to a finger's contact area. In standard touchscreen systems a finger often appears to touch several screen objects, but the system responds as though only a single pixel is touched. In contact area interaction all objects under the finger respond to the touch. Users activate control widgets by sliding a movable element, as though flipping a switch. These Sliding Widgets resolve selection ambiguity and provide designers with a rich vocabulary of self-disclosing interaction mechanism. We showcase the design of several types of Sliding Widgets, and report study results showing that the simplest of these widgets, the Sliding Button, performs on-par with medium-sized pushbuttons and offers greater accuracy for small-sized buttons.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"20 1","pages":"13-22"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78720947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 76
Optically sensing tongue gestures for computer input
T. S. Saponas, D. Kelly, B. Parviz, Desney S. Tan
Many patients with paralyzing injuries or medical conditions retain the use of their cranial nerves, which control the eyes, jaw, and tongue. While researchers have explored eye-tracking and speech technologies for these patients, we believe there is potential for directly sensing explicit tongue movement for controlling computers. In this paper, we describe a novel approach of using infrared optical sensors embedded within a dental retainer to sense tongue gestures. We describe an experiment showing our system effectively discriminating between four simple gestures with over 90% accuracy. In this experiment, users were also able to play the popular game Tetris with their tongues. Finally, we present lessons learned and opportunities for future work.
{"title":"Optically sensing tongue gestures for computer input","authors":"T. S. Saponas, D. Kelly, B. Parviz, Desney S. Tan","doi":"10.1145/1622176.1622209","DOIUrl":"https://doi.org/10.1145/1622176.1622209","url":null,"abstract":"Many patients with paralyzing injuries or medical conditions retain the use of their cranial nerves, which control the eyes, jaw, and tongue. While researchers have explored eye-tracking and speech technologies for these patients, we believe there is potential for directly sensing explicit tongue movement for controlling computers. In this paper, we describe a novel approach of using infrared optical sensors embedded within a dental retainer to sense tongue gestures. We describe an experiment showing our system effectively discriminating between four simple gestures with over 90% accuracy. In this experiment, users were also able to play the popular game Tetris with their tongues. Finally, we present lessons learned and opportunities for future work.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"17 1","pages":"177-180"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72986075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 85