
Proceedings of the 26th annual ACM symposium on User interface software and technology: Latest Publications

MagGetz: customizable passive tangible controllers on and around conventional mobile devices
Sungjae Hwang, Myungwook Ahn, K. Wohn
This paper proposes user-customizable passive control widgets, called MagGetz, which enable tangible interaction on and around mobile devices without requiring power or wireless connections. This is achieved by tracking and analyzing the magnetic field generated by controllers attached on and around the device through a single magnetometer, which is commonly integrated in smartphones today. The proposed method provides users with a broader interaction area, customizable input layouts, richer physical clues, and higher input expressiveness without the need for hardware modifications. We have presented a software toolkit and several applications using MagGetz.
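The abstract describes the tracking pipeline only at a high level. As a rough illustration of the general idea (not the authors' algorithm), the sketch below classifies a widget's state from a single 3-axis magnetometer sample by nearest-neighbor matching against per-state field signatures recorded in a hypothetical calibration step; all names and values are made up.

```python
import math

# Hypothetical calibration table: widget state -> magnetometer signature
# (x, y, z) in microtesla, recorded once per widget position with the
# ambient field already subtracted.
CALIBRATED_SIGNATURES = {
    "slider_left":   (12.0, -3.5, 48.0),
    "slider_center": (18.5, -1.0, 52.0),
    "slider_right":  (25.0,  2.5, 55.0),
}

def classify_widget_state(reading, baseline):
    """Match a raw magnetometer sample against the calibrated signatures.

    `baseline` is the ambient field measured with no widget attached, so only
    the field contributed by the widget's magnet is compared.
    """
    widget_field = tuple(r - b for r, b in zip(reading, baseline))
    return min(CALIBRATED_SIGNATURES,
               key=lambda state: math.dist(widget_field, CALIBRATED_SIGNATURES[state]))

# Example: a sample taken while the slider sits near its right-hand stop.
print(classify_widget_state(reading=(42.0, 2.0, 100.0), baseline=(18.0, -0.5, 45.0)))
```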
Citations: 74
YouMove: enhancing movement training with an augmented reality mirror
Fraser Anderson, Tovi Grossman, Justin Matejka, G. Fitzmaurice
YouMove is a novel system that allows users to record and learn physical movement sequences. The recording system is designed to be simple, allowing anyone to create and share training content. The training system uses recorded data to train the user using a large-scale augmented reality mirror. The system trains the user through a series of stages that gradually reduce the user's reliance on guidance and feedback. This paper discusses the design and implementation of YouMove and its interactive mirror. We also present a user study in which YouMove was shown to improve learning and short-term retention by a factor of 2 compared to a traditional video demonstration.
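The abstract mentions guidance and feedback without implementation detail. Purely as an illustration of pose-based feedback (not the YouMove implementation), the sketch below compares a live skeleton against a recorded keyframe using joint angles; the joint names, angle representation, and tolerance are hypothetical.

```python
# Hypothetical joint-angle representation: joint name -> angle in degrees.
recorded_pose = {"left_elbow": 90.0, "right_elbow": 85.0, "left_knee": 170.0}
live_pose     = {"left_elbow": 72.0, "right_elbow": 88.0, "left_knee": 165.0}

def pose_error(recorded, live):
    """Mean absolute joint-angle difference between recorded and live poses."""
    diffs = [abs(recorded[j] - live[j]) for j in recorded if j in live]
    return sum(diffs) / len(diffs)

def feedback(recorded, live, tolerance_deg=10.0):
    """Return per-joint corrections for joints that deviate beyond the tolerance."""
    return {j: recorded[j] - live[j]
            for j in recorded
            if j in live and abs(recorded[j] - live[j]) > tolerance_deg}

print(round(pose_error(recorded_pose, live_pose), 1))  # 8.7
print(feedback(recorded_pose, live_pose))              # {'left_elbow': 18.0}
```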
Citations: 278
Bayesian touch: a statistical criterion of target selection with finger touch
Xiaojun Bi, Shumin Zhai
To improve the accuracy of target selection for finger touch, we conceptualize finger touch input as an uncertain process, and derive a statistical target selection criterion, the Bayesian Touch Criterion, by combining the basic Bayes' rule of probability with the generalized dual Gaussian distribution hypothesis of finger touch. The Bayesian Touch Criterion selects as the intended target the candidate with the shortest Bayesian Touch Distance to the touch point, which is computed from the distance between the touch point and the target center and from the target size. We give the derivation of the Bayesian Touch Criterion and its empirical evaluation with two experiments. The results showed that for 2-dimensional circular target selection, the Bayesian Touch Criterion is significantly more accurate than the commonly used Visual Boundary Criterion (i.e., a target is selected if and only if the touch point falls within its boundary) and its two variants.
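The exact Bayesian Touch Distance is defined in the paper; the sketch below only illustrates the selection rule under a simplified stand-in model in which each target's touch distribution is an isotropic Gaussian whose variance combines an absolute term and a term proportional to the target radius. The variance model and its constants are assumptions, not the authors' parameters.

```python
import math

def bayesian_touch_distance(touch, center, target_radius,
                            sigma_abs=1.5, sigma_rel=0.4):
    """Negative log-likelihood (up to a constant) of a touch given a target.

    Assumes, for illustration only, an isotropic Gaussian touch distribution
    whose standard deviation combines an absolute term and a term proportional
    to the target radius; smaller value = more likely target.
    """
    sigma2 = sigma_abs ** 2 + (sigma_rel * target_radius) ** 2
    d2 = (touch[0] - center[0]) ** 2 + (touch[1] - center[1]) ** 2
    return d2 / (2.0 * sigma2) + math.log(sigma2)

def select_target(touch, targets):
    """Pick the candidate with the smallest Bayesian-style distance."""
    return min(targets, key=lambda t: bayesian_touch_distance(
        touch, t["center"], t["radius"]))

targets = [
    {"name": "A", "center": (10.0, 10.0), "radius": 2.0},
    {"name": "B", "center": (16.0, 10.0), "radius": 5.0},
]
print(select_target((12.3, 10.0), targets)["name"])  # -> 'A'
```

In this toy configuration the touch point falls outside the small target's boundary but inside the large one's, so a pure visual-boundary rule would pick B, while the statistical criterion prefers the nearer target A under this stand-in model.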
Citations: 79
Session details: Sensing
Chris Harrison
Citations: 0
The drawing assistant: automated drawing guidance and feedback from photographs
Emmanuel Iarussi, A. Bousseau, Theophanis Tsandilas
We present an interactive drawing tool that provides automated guidance over model photographs to help people practice traditional drawing-by-observation techniques. The drawing literature describes a number of techniques to support this task and help people gain consciousness of the shapes in a scene and their relationships. We compile these techniques and derive a set of construction lines that we automatically extract from a model photograph. We then display these lines over the model to guide its manual reproduction by the user on the drawing canvas. Finally, we use shape-matching to register the user's sketch with the model guides. We use this registration to provide corrective feedback to the user. Our user studies show that automatically extracted construction lines can help users draw more accurately. Furthermore, users report that guidance and corrective feedback help them better understand how to draw.
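The construction-line extraction and the shape-matching registration are the paper's contributions and are not reproduced here. The sketch below only illustrates the final corrective-feedback step in the simplest possible way: after registration, flag the points of a user stroke that drift beyond a tolerance from every construction line. The segment representation, tolerance, and data are hypothetical.

```python
import math

def point_to_segment_distance(p, a, b):
    """Distance from point p to the line segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def stroke_feedback(stroke, guide_segments, tolerance=3.0):
    """Flag stroke points farther than `tolerance` from every construction line."""
    off_guide = []
    for p in stroke:
        nearest = min(point_to_segment_distance(p, a, b) for a, b in guide_segments)
        if nearest > tolerance:
            off_guide.append((p, round(nearest, 1)))
    return off_guide

guides = [((0, 0), (100, 0)), ((0, 0), (0, 100))]   # two construction lines
stroke = [(5, 1), (50, 2), (80, 9)]                 # user's registered stroke
print(stroke_feedback(stroke, guides))              # [((80, 9), 9.0)]
```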
Citations: 68
FingerPad: private and subtle interaction using fingertips
Liwei Chan, Rong-Hao Liang, M. Tsai, K. Cheng, Chao-Huai Su, Mike Y. Chen, Wen-Huang Cheng, Bing-Yu Chen
We present FingerPad, a nail-mounted device that turns the tip of the index finger into a touchpad, allowing private and subtle interaction while on the move. FingerPad enables touch input using magnetic tracking, by adding a Hall sensor grid on the index fingernail, and a magnet on the thumbnail. Since it permits input through the pinch gesture, FingerPad is suitable for private use because the movements of the fingers in a pinch are subtle and are naturally hidden by the hand. Functionally, FingerPad resembles a touchpad, and also allows for eyes-free use. Additionally, since the necessary devices are attached to the nails, FingerPad preserves natural haptic feedback without affecting the native function of the fingertips. Through a user study, we analyze three design factors, namely posture, commitment method and target size, to assess the design of the FingerPad. Though the results show some trade-off among the factors, participants generally achieve 93% accuracy for very small targets (1.2 mm width) in the seated condition, and 92% accuracy for 2.5 mm-wide targets in the walking condition.
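The abstract specifies a Hall sensor grid and a thumb magnet but not the tracking method. As a rough stand-in (not the authors' algorithm), the sketch below estimates a 2D contact position on a hypothetical 3x3 grid by taking the magnitude-weighted centroid of the sensor readings.

```python
# Sensor grid coordinates in millimetres on the fingernail (hypothetical 3x3
# layout with 4 mm pitch, listed row by row).
SENSOR_POSITIONS = [(x * 4.0, y * 4.0) for y in range(3) for x in range(3)]

def estimate_touch_position(readings):
    """Weighted centroid of per-sensor field magnitudes (one value per sensor)."""
    total = sum(readings)
    if total == 0:
        return None  # magnet out of range
    cx = sum(w * x for w, (x, y) in zip(readings, SENSOR_POSITIONS)) / total
    cy = sum(w * y for w, (x, y) in zip(readings, SENSOR_POSITIONS)) / total
    return cx, cy

# Strongest response near the centre-right of the grid.
readings = [0.1, 0.2, 0.1,
            0.2, 0.8, 1.0,
            0.1, 0.3, 0.2]
print(estimate_touch_position(readings))  # roughly (5.2, 4.3)
```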
Citations: 183
uTrack: 3D input using two magnetic sensors
Ke-Yu Chen, Kent Lyons, Sean White, Shwetak N. Patel
While much progress has been made in wearable computing in recent years, input techniques remain a key challenge. In this paper, we introduce uTrack, a technique to convert the thumb and fingers into a 3D input system using magnetic field (MF) sensing. A user wears a pair of magnetometers on the back of their fingers and a permanent magnet affixed to the back of the thumb. By moving the thumb across the fingers, we obtain a continuous input stream that can be used for 3D pointing. Specifically, our novel algorithm calculates the magnet's 3D position and tilt angle directly from the sensor readings. We evaluated uTrack as an input device, showing an average tracking accuracy of 4.84 mm in 3D space - sufficient for subtle interaction. We also demonstrate a real-time prototype and example applications allowing users to interact with the computer using 3D finger input.
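The authors' algorithm computes position and tilt directly from the readings; the sketch below instead shows a generic alternative under stated assumptions: fit a point-dipole field model to two 3-axis magnetometer readings with nonlinear least squares. The sensor spacing, units, and initial guess are made up, and the physical constant in the dipole equation is dropped.

```python
import numpy as np
from scipy.optimize import least_squares

SENSOR_POSITIONS = np.array([[0.00, 0.0, 0.0],    # sensor 1 (metres)
                             [0.02, 0.0, 0.0]])   # sensor 2, 2 cm away

def dipole_field(sensor_pos, magnet_pos, moment):
    """Point-dipole model: B(r) ~ (3 (m . r_hat) r_hat - m) / |r|^3."""
    r = sensor_pos - magnet_pos
    dist = np.linalg.norm(r)
    r_hat = r / dist
    return (3.0 * np.dot(moment, r_hat) * r_hat - moment) / dist ** 3

def residuals(params, readings):
    """Model-vs-measurement error stacked over both sensors.

    params = [x, y, z, mx, my, mz]: magnet position and moment vector.
    """
    pos, moment = params[:3], params[3:]
    pred = np.concatenate([dipole_field(s, pos, moment) for s in SENSOR_POSITIONS])
    return pred - readings.ravel()

def track(readings, initial_guess):
    result = least_squares(residuals, initial_guess, args=(readings,))
    return result.x[:3]   # estimated magnet position

# Synthetic readings generated from a known magnet pose, for a quick check.
true_pos, true_moment = np.array([0.01, 0.03, 0.02]), np.array([0.0, 0.0, 1.0])
readings = np.array([dipole_field(s, true_pos, true_moment) for s in SENSOR_POSITIONS])
print(track(readings, initial_guess=[0.0, 0.02, 0.03, 0.0, 0.0, 0.8]))
# should land near the true position [0.01, 0.03, 0.02]
```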
Citations: 187
Lumitrack: low cost, high precision, high speed tracking with projected m-sequences
R. Xiao, Chris Harrison, Karl D. D. Willis, I. Poupyrev, S. Hudson
We present Lumitrack, a novel motion tracking technology that uses projected structured patterns and linear optical sensors. Each sensor unit is capable of recovering 2D location within the projection area, while multiple sensors can be combined for up to six degree of freedom (DOF) tracking. Our structured light approach is based on special patterns, called m-sequences, in which any consecutive sub-sequence of m bits is unique. Lumitrack can utilize both digital and static projectors, as well as scalable embedded sensing configurations. The resulting system enables high-speed, high precision, and low-cost motion tracking for a wide range of interactive applications. We detail the hardware, operation, and performance characteristics of our approach, as well as a series of example applications that highlight its immediate feasibility and utility.
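The key property the abstract states, that every consecutive sub-sequence of m bits in an m-sequence is unique, is easy to demonstrate. The sketch below generates a maximal-length sequence with a Fibonacci LFSR and builds a window-to-position table; the taps for m = 5 are a standard primitive-polynomial choice, not taken from the paper.

```python
def lfsr_m_sequence(m=5, taps=(5, 2), seed=0b00001):
    """Generate one period (2^m - 1 bits) of a maximal-length LFSR sequence."""
    state, bits = seed, []
    for _ in range(2 ** m - 1):
        bits.append(state & 1)
        feedback = 0
        for t in taps:                      # XOR the tapped bits (1-indexed)
            feedback ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (feedback << (m - 1))
    return bits

def build_position_table(bits, m):
    """Map each m-bit window (taken cyclically) to its start index."""
    n = len(bits)
    return {tuple(bits[(i + j) % n] for j in range(m)): i for i in range(n)}

m = 5
seq = lfsr_m_sequence(m)
table = build_position_table(seq, m)
assert len(table) == len(seq)          # every m-bit window is unique
observed = tuple(seq[10:10 + m])       # bits read by a linear optical sensor
print(table[observed])                 # -> 10: the window pins down the position
```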
Citations: 31
Haptic feedback design for a virtual button along force-displacement curves
Sunjun Kim, Geehyuk Lee
In this paper, we present a haptic feedback method for a virtual button based on the force-displacement curves of a physical button. The original feature of the proposed method is that it provides haptic feedback not only for the "click" sensation but also for the moving sensation before and after transition points in a force-displacement curve. The haptic feedback is delivered by vibrotactile stimulation only and does not require a force feedback mechanism. We conducted user experiments to show that the resulting haptic feedback is realistic and distinctive. Participants were able to distinguish among six different virtual buttons with 94.1% accuracy, even in a noisy environment. In addition, participants were able to associate four virtual buttons with their physical counterparts, with a correct answer rate of 79.2%.
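The paper's feedback design is tied to measured force-displacement curves of real buttons; the sketch below is only a toy version of the idea: walk along a piecewise-linear curve of a simulated button and emit a vibrotactile pulse whenever the finger crosses a transition point. The curve breakpoints and transition positions are hypothetical.

```python
# (displacement_mm, force_N) breakpoints of a hypothetical snap-button curve.
CURVE = [(0.0, 0.0), (0.8, 1.2), (1.0, 0.5), (1.6, 1.5)]
TRANSITIONS = [0.8, 1.0]   # displacements where the "click" snap occurs

def force_at(displacement):
    """Linear interpolation of the force-displacement curve."""
    for (x0, y0), (x1, y1) in zip(CURVE, CURVE[1:]):
        if x0 <= displacement <= x1:
            return y0 + (y1 - y0) * (displacement - x0) / (x1 - x0)
    return CURVE[-1][1]

def vibration_events(trajectory):
    """Emit a pulse whenever consecutive samples straddle a transition point."""
    events = []
    for prev, cur in zip(trajectory, trajectory[1:]):
        for t in TRANSITIONS:
            if min(prev, cur) < t <= max(prev, cur):
                events.append((cur, "pulse", round(force_at(cur), 2)))
    return events

# Simulated press: displacement samples over time (mm).
press = [0.0, 0.3, 0.6, 0.9, 1.2, 1.5]
print(vibration_events(press))   # pulses at the 0.9 mm and 1.2 mm samples
```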
Citations: 65
Session details: Development
W. Stuerzlinger
Citations: 0