Sungjae Hwang, Myungwook Ahn, K. Wohn
This paper proposes user-customizable passive control widgets, called MagGetz, which enable tangible interaction on and around mobile devices without requiring power or wireless connections. This is achieved by tracking and analyzing the magnetic field generated by controllers attached on and around the device, using the single magnetometer commonly integrated in smartphones today. The proposed method provides users with a broader interaction area, customizable input layouts, richer physical cues, and higher input expressiveness without hardware modifications. We present a software toolkit and several applications using MagGetz.
{"title":"MagGetz: customizable passive tangible controllers on and around conventional mobile devices","authors":"Sungjae Hwang, Myungwook Ahn, K. Wohn","doi":"10.1145/2501988.2501991","DOIUrl":"https://doi.org/10.1145/2501988.2501991","url":null,"abstract":"This paper proposes user-customizable passive control widgets, called MagGetz, which enable tangible interaction on and around mobile devices without requiring power or wireless connections. This is achieved by tracking and ana-lyzing the magnetic field generated by controllers attached on and around the device through a single magnetometer, which is commonly integrated in smartphones today. The proposed method provides users with a broader interaction area, customizable input layouts, richer physical clues, and higher input expressiveness without the need for hardware modifications. We have presented a software toolkit and several applications using MagGetz.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133328065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fraser Anderson, Tovi Grossman, Justin Matejka, G. Fitzmaurice
YouMove is a novel system that allows users to record and learn physical movement sequences. The recording system is designed to be simple, allowing anyone to create and share training content. The training system uses the recorded data to teach the user in front of a large-scale augmented reality mirror, progressing through a series of stages that gradually reduce the user's reliance on guidance and feedback. This paper discusses the design and implementation of YouMove and its interactive mirror. We also present a user study in which YouMove improved learning and short-term retention by a factor of 2 compared to a traditional video demonstration.
{"title":"YouMove: enhancing movement training with an augmented reality mirror","authors":"Fraser Anderson, Tovi Grossman, Justin Matejka, G. Fitzmaurice","doi":"10.1145/2501988.2502045","DOIUrl":"https://doi.org/10.1145/2501988.2502045","url":null,"abstract":"YouMove is a novel system that allows users to record and learn physical movement sequences. The recording system is designed to be simple, allowing anyone to create and share training content. The training system uses recorded data to train the user using a large-scale augmented reality mirror. The system trains the user through a series of stages that gradually reduce the user's reliance on guidance and feedback. This paper discusses the design and implementation of YouMove and its interactive mirror. We also present a user study in which YouMove was shown to improve learning and short-term retention by a factor of 2 compared to a traditional video demonstration.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131286893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xiaojun Bi, Shumin Zhai
To improve the accuracy of target selection for finger touch, we conceptualize finger touch input as an uncertain process and derive a statistical target selection criterion, the Bayesian Touch Criterion, by combining Bayes' rule of probability with the generalized dual Gaussian distribution hypothesis of finger touch. The Bayesian Touch Criterion selects as the intended target the candidate with the shortest Bayesian Touch Distance to the touch point, computed from the distance between the touch point and the target center, and from the target size. We give the derivation of the Bayesian Touch Criterion and its empirical evaluation in two experiments. The results show that for two-dimensional circular target selection, the Bayesian Touch Criterion is significantly more accurate than the commonly used visual boundary criterion (a target is selected if and only if the touch point falls within its boundary) and its two variants.
{"title":"Bayesian touch: a statistical criterion of target selection with finger touch","authors":"Xiaojun Bi, Shumin Zhai","doi":"10.1145/2501988.2502058","DOIUrl":"https://doi.org/10.1145/2501988.2502058","url":null,"abstract":"To improve the accuracy of target selection for finger touch, we conceptualize finger touch input as an uncertain process, and derive a statistical target selection criterion, Bayesian Touch Criterion, by combining the basic Bayes' rule of probability with the generalized dual Gaussian distribution hypothesis of finger touch. The Bayesian Touch Criterion selects the intended target as the candidate with the shortest Bayesian Touch Distance to the touch point, which is computed from the touch point to the target center distance and the target size. We give the derivation of the Bayesian Touch Criterion and its empirical evaluation with two experiments. The results showed that for 2-dimensional circular target selection, the Bayesian Touch Criterion is significantly more accurate than the commonly used Visual Boundary Criterion (i.e., a target is selected if and only if the touch point falls within its boundary) and its two variants.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116383678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Sensing","authors":"Chris Harrison","doi":"10.1145/3254703","DOIUrl":"https://doi.org/10.1145/3254703","url":null,"abstract":"","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"138 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114664725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emmanuel Iarussi, A. Bousseau, Theophanis Tsandilas
We present an interactive drawing tool that provides automated guidance over model photographs to help people practice traditional drawing-by-observation techniques. The drawing literature describes a number of techniques to support this task and help people become aware of the shapes in a scene and their relationships. We compile these techniques and derive a set of construction lines that we automatically extract from a model photograph. We then display these lines over the model to guide its manual reproduction by the user on the drawing canvas. Finally, we use shape matching to register the user's sketch with the model guides, and use this registration to provide corrective feedback to the user. Our user studies show that automatically extracted construction lines can help users draw more accurately. Furthermore, users report that guidance and corrective feedback help them better understand how to draw.
{"title":"The drawing assistant: automated drawing guidance and feedback from photographs","authors":"Emmanuel Iarussi, A. Bousseau, Theophanis Tsandilas","doi":"10.1145/2501988.2501997","DOIUrl":"https://doi.org/10.1145/2501988.2501997","url":null,"abstract":"We present an interactive drawing tool that provides automated guidance over model photographs to help people practice traditional drawing-by-observation techniques. The drawing literature describes a number of techniques to %support this task and help people gain consciousness of the shapes in a scene and their relationships. We compile these techniques and derive a set of construction lines that we automatically extract from a model photograph. We then display these lines over the model to guide its manual reproduction by the user on the drawing canvas. Finally, we use shape-matching to register the user's sketch with the model guides. We use this registration to provide corrective feedback to the user. Our user studies show that automatically extracted construction lines can help users draw more accurately. Furthermore, users report that guidance and corrective feedback help them better understand how to draw.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125072363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Liwei Chan, Rong-Hao Liang, M. Tsai, K. Cheng, Chao-Huai Su, Mike Y. Chen, Wen-Huang Cheng, Bing-Yu Chen
We present FingerPad, a nail-mounted device that turns the tip of the index finger into a touchpad, allowing private and subtle interaction while on the move. FingerPad enables touch input through magnetic tracking, by adding a grid of Hall-effect sensors on the index fingernail and a magnet on the thumbnail. Because input is performed through a pinch gesture, FingerPad is suited to private use: finger movements within a pinch are subtle and naturally hidden by the hand. Functionally, FingerPad resembles a touchpad and also allows eyes-free use. Additionally, since the devices are attached to the nails, FingerPad preserves natural haptic feedback without affecting the native function of the fingertips. Through a user study, we analyze three design factors, namely posture, commitment method, and target size, to assess the design of FingerPad. Although the results show trade-offs among these factors, participants generally achieved 93% accuracy for very small targets (1.2 mm wide) while seated, and 92% accuracy for 2.5 mm-wide targets while walking.
{"title":"FingerPad: private and subtle interaction using fingertips","authors":"Liwei Chan, Rong-Hao Liang, M. Tsai, K. Cheng, Chao-Huai Su, Mike Y. Chen, Wen-Huang Cheng, Bing-Yu Chen","doi":"10.1145/2501988.2502016","DOIUrl":"https://doi.org/10.1145/2501988.2502016","url":null,"abstract":"We present FingerPad, a nail-mounted device that turns the tip of the index finger into a touchpad, allowing private and subtle interaction while on the move. FingerPad enables touch input using magnetic tracking, by adding a Hall sensor grid on the index fingernail, and a magnet on the thumbnail. Since it permits input through the pinch gesture, FingerPad is suitable for private use because the movements of the fingers in a pinch are subtle and are naturally hidden by the hand. Functionally, FingerPad resembles a touchpad, and also allows for eyes-free use. Additionally, since the necessary devices are attached to the nails, FingerPad preserves natural haptic feedback without affecting the native function of the fingertips. Through user study, we analyze the three design factors, namely posture, commitment method and target size, to assess the design of the FingerPad. Though the results show some trade-off among the factors, generally participants achieve 93% accuracy for very small targets (1.2mm-width) in the seated condition, and 92% accuracy for 2.5mm-width targets in the walking condition.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125892710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ke-Yu Chen, Kent Lyons, Sean White, Shwetak N. Patel
While much progress has been made in wearable computing in recent years, input techniques remain a key challenge. In this paper, we introduce uTrack, a technique that converts the thumb and fingers into a 3D input system using magnetic field (MF) sensing. A user wears a pair of magnetometers on the back of their fingers and a permanent magnet affixed to the back of the thumb. By moving the thumb across the fingers, we obtain a continuous input stream that can be used for 3D pointing. Specifically, our novel algorithm calculates the magnet's 3D position and tilt angle directly from the sensor readings. We evaluated uTrack as an input device, showing an average tracking accuracy of 4.84 mm in 3D space, sufficient for subtle interaction. We also demonstrate a real-time prototype and example applications allowing users to interact with the computer using 3D finger input.
{"title":"uTrack: 3D input using two magnetic sensors","authors":"Ke-Yu Chen, Kent Lyons, Sean White, Shwetak N. Patel","doi":"10.1145/2501988.2502035","DOIUrl":"https://doi.org/10.1145/2501988.2502035","url":null,"abstract":"While much progress has been made in wearable computing in recent years, input techniques remain a key challenge. In this paper, we introduce uTrack, a technique to convert the thumb and fingers into a 3D input system using magnetic field (MF) sensing. A user wears a pair of magnetometers on the back of their fingers and a permanent magnet affixed to the back of the thumb. By moving the thumb across the fingers, we obtain a continuous input stream that can be used for 3D pointing. Specifically, our novel algorithm calculates the magnet's 3D position and tilt angle directly from the sensor readings. We evaluated uTrack as an input device, showing an average tracking accuracy of 4.84 mm in 3D space - sufficient for subtle interaction. We also demonstrate a real-time prototype and example applications allowing users to interact with the computer using 3D finger input.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115292389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Xiao, Chris Harrison, Karl D. D. Willis, I. Poupyrev, S. Hudson
We present Lumitrack, a novel motion tracking technology that uses projected structured patterns and linear optical sensors. Each sensor unit can recover its 2D location within the projection area, and multiple sensors can be combined for up to six-degree-of-freedom (6-DOF) tracking. Our structured-light approach is based on special patterns, called m-sequences, in which every consecutive sub-sequence of m bits is unique. Lumitrack can utilize both digital and static projectors, as well as scalable embedded sensing configurations. The resulting system enables high-speed, high-precision, low-cost motion tracking for a wide range of interactive applications. We detail the hardware, operation, and performance characteristics of our approach, as well as a series of example applications that highlight its immediate feasibility and utility.
{"title":"Lumitrack: low cost, high precision, high speed tracking with projected m-sequences","authors":"R. Xiao, Chris Harrison, Karl D. D. Willis, I. Poupyrev, S. Hudson","doi":"10.1145/2501988.2502022","DOIUrl":"https://doi.org/10.1145/2501988.2502022","url":null,"abstract":"We present Lumitrack, a novel motion tracking technology that uses projected structured patterns and linear optical sensors. Each sensor unit is capable of recovering 2D location within the projection area, while multiple sensors can be combined for up to six degree of freedom (DOF) tracking. Our structured light approach is based on special patterns, called m-sequences, in which any consecutive sub-sequence of m bits is unique. Lumitrack can utilize both digital and static projectors, as well as scalable embedded sensing configurations. The resulting system enables high-speed, high precision, and low-cost motion tracking for a wide range of interactive applications. We detail the hardware, operation, and performance characteristics of our approach, as well as a series of example applications that highlight its immediate feasibility and utility.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130133630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sunjun Kim, Geehyuk Lee
In this paper, we present a haptic feedback method for a virtual button based on the force-displacement curves of a physical button. The distinctive feature of the proposed method is that it provides haptic feedback not only for the "click" sensation but also for the moving sensation before and after the transition points of a force-displacement curve. The feedback is delivered by vibrotactile stimulation alone and does not require a force-feedback mechanism. We conducted user experiments showing that the resulting haptic feedback is realistic and distinctive: participants distinguished among six different virtual buttons with 94.1% accuracy even in a noisy environment, and associated four virtual buttons with their physical counterparts with a correct-answer rate of 79.2%.
{"title":"Haptic feedback design for a virtual button along force-displacement curves","authors":"Sunjun Kim, Geehyuk Lee","doi":"10.1145/2501988.2502041","DOIUrl":"https://doi.org/10.1145/2501988.2502041","url":null,"abstract":"In this paper, we present a haptic feedback method for a virtual button based on the force-displacement curves of a physical button. The original feature of the proposed method is that it provides haptic feedback, not only for the \"click\" sensation but also for the moving sensation before and after transition points in a force-displacement curve. The haptic feedback is by vibrotactile stimulations only and does not require a force feedback mechanism. We conducted user experiments to show that the resultant haptic feedback is realistic and distinctive. Participants were able to distinguish among six different virtual buttons, with 94.1% accuracy even in a noisy environment. In addition, participants were able to associate four virtual buttons with their physical counterparts, with a correct answer rate of 79.2%.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130486715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Development","authors":"W. Stuerzlinger","doi":"10.1145/3254708","DOIUrl":"https://doi.org/10.1145/3254708","url":null,"abstract":"","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130820307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}