Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST 2015)

TurkDeck: Physical Virtual Reality Based on People
Lung-Pan Cheng, T. Roumen, Hannes Rantzsch, Sven Köhler, Patrick Schmidt, Róbert Kovács, Johannes Jasper, J. Kemper, Patrick Baudisch
TurkDeck is an immersive virtual reality system that reproduces not only what users see and hear, but also what users feel. TurkDeck produces the haptic sensation using props, i.e., when users touch or manipulate an object in the virtual world, they simultaneously touch or manipulate a corresponding object in the physical world. Unlike previous work on prop-based virtual reality, however, TurkDeck allows creating arbitrarily large virtual worlds in finite space and with a finite set of physical props. The key idea behind TurkDeck is that it creates these physical representations on the fly by having a group of human workers present and operate the props only when and where the user can actually reach them. TurkDeck manages these so-called "human actuators" by displaying visual instructions that tell them when and where to place props and how to actuate them. We demonstrate TurkDeck with an immersive 300 m² experience in 25 m² of physical space. We show how to simulate a wide range of physical objects and effects, including walls, doors, ledges, steps, beams, switches, stompers, portals, zip lines, and wind. In a user study, participants rated the realism/immersion of TurkDeck higher than a traditional prop-less baseline condition (4.9 vs. 3.6 on a 7-point Likert scale).
{"title":"TurkDeck: Physical Virtual Reality Based on People","authors":"Lung-Pan Cheng, T. Roumen, Hannes Rantzsch, Sven Köhler, Patrick Schmidt, Róbert Kovács, Johannes Jasper, J. Kemper, Patrick Baudisch","doi":"10.1145/2807442.2807463","DOIUrl":"https://doi.org/10.1145/2807442.2807463","url":null,"abstract":"TurkDeck is an immersive virtual reality system that reproduces not only what users see and hear, but also what users feel. TurkDeck produces the haptic sensation using props, i.e., when users touch or manipulate an object in the virtual world, they simultaneously also touch or manipulate a corresponding object in the physical world. Unlike previous work on prop-based virtual reality, however, TurkDeck allows creating arbitrarily large virtual worlds in finite space and using a finite set of physical props. The key idea behind TurkDeck is that it creates these physical representations on the fly by making a group of human workers present and operate the props only when and where the user can actually reach them. TurkDeck manages these so-called \"human actuators\" by displaying visual instructions that tell the human actuators when and where to place props and how to actuate them. We demonstrate TurkDeck at the example of an immersive 300m2 experience in 25m2 physical space. We show how to simulate a wide range of physical objects and effects, including walls, doors, ledges, steps, beams, switches, stompers, portals, zip lines, and wind. In a user study, participants rated the realism/immersion of TurkDeck higher than a traditional prop-less baseline condition (4.9 vs. 3.6 on 7 item Likert).","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130099501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kinetic Blocks: Actuated Constructive Assembly for Interaction and Display
Philipp Schoessler, Daniel Windham, Daniel Leithinger, Sean Follmer, H. Ishii
Pin-based shape displays not only give physical form to digital information, but also have the inherent ability to accurately move and manipulate objects placed on top of them. In this paper we focus on such object manipulation: we present ideas and techniques that use the underlying shape change to give kinetic ability to otherwise inanimate objects. First, we describe the shape display's ability to assemble, disassemble, and reassemble structures from simple passive building blocks through stacking, scaffolding, and catapulting. A technical evaluation demonstrates the reliability of the presented techniques. Second, we introduce special kinematic blocks that are actuated and sensed through the underlying pins. These blocks translate vertical pin movements into other degrees of freedom like rotation or horizontal movement. This interplay of the shape display with objects on its surface allows us to render otherwise inaccessible forms, like overhangs, and enables richer input and output.
{"title":"Kinetic Blocks: Actuated Constructive Assembly for Interaction and Display","authors":"Philipp Schoessler, Daniel Windham, Daniel Leithinger, Sean Follmer, H. Ishii","doi":"10.1145/2807442.2807453","DOIUrl":"https://doi.org/10.1145/2807442.2807453","url":null,"abstract":"Pin-based shape displays not only give physical form to digital information, they have the inherent ability to accurately move and manipulate objects placed on top of them. In this paper we focus on such object manipulation: we present ideas and techniques that use the underlying shape change to give kinetic ability to otherwise inanimate objects. First, we describe the shape display's ability to assemble, disassemble, and reassemble structures from simple passive building blocks through stacking, scaffolding, and catapulting. A technical evaluation demonstrates the reliability of the presented techniques. Second, we introduce special kinematic blocks that are actuated and sensed through the underlying pins. These blocks translate vertical pin movements into other degrees of freedom like rotation or horizontal movement. This interplay of the shape display with objects on its surface allows us to render otherwise inaccessible forms, like overhangs, and enables richer input and output.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128844637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D Printed Hair: Fused Deposition Modeling of Soft Strands, Fibers, and Bristles
Gierad Laput, Xiang 'Anthony' Chen, Chris Harrison
We introduce a technique for furbricating 3D printed hair, fibers, and bristles by exploiting the stringing phenomenon inherent in 3D printers that use fused deposition modeling. Our approach offers a range of design parameters for controlling the properties of single strands and of hair bundles. We further detail a list of post-processing techniques for refining the behavior and appearance of printed strands. We provide several examples of output, demonstrating the immediate feasibility of our approach using a low-cost, commodity printer. Overall, this technique extends the capabilities of 3D printing in a new and interesting way, without requiring any new hardware.
{"title":"3D Printed Hair: Fused Deposition Modeling of Soft Strands, Fibers, and Bristles","authors":"Gierad Laput, Xiang 'Anthony' Chen, Chris Harrison","doi":"10.1145/2807442.2807484","DOIUrl":"https://doi.org/10.1145/2807442.2807484","url":null,"abstract":"We introduce a technique for furbricating 3D printed hair, fibers and bristles, by exploiting the stringing phenomena inherent in 3D printers using fused deposition modeling. Our approach offers a range of design parameters for controlling the properties of single strands and also of hair bundles. We further detail a list of post-processing techniques for refining the behavior and appearance of printed strands. We provide several examples of output, demonstrating the immediate feasibility of our approach using a low cost, commodity printer. Overall, this technique extends the capabilities of 3D printing in a new and interesting way, without requiring any new hardware.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125380743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PERCs: Persistently Trackable Tangibles on Capacitive Multi-Touch Displays
Simon Voelker, Christian Cherek, Jan Thar, Thorsten Karrer, Christian Thoresen, Kjell Ivar Øvergård, Jan O. Borchers
Tangible objects on capacitive multi-touch surfaces are usually only detected while the user is touching them. When the user lets go of such a tangible, the system cannot distinguish whether the user just released the tangible, or picked it up and removed it from the surface. We introduce PERCs, persistent capacitive tangibles that "know" whether they are currently on a capacitive touch surface or not. This is achieved by adding a small field sensor to the tangible to detect the touch screen's own, weak electromagnetic touch detection probing signal. Thus, unlike previous designs, PERCs do not get filtered out over time by the adaptive signal filters of the touch screen. We provide a technical overview of the theory behind PERCs and our prototype construction, and we evaluate detection rates, timing performance, and positional and angular accuracy for PERCs on a variety of unmodified, commercially available multi-touch devices. Through their affordable circuitry and high accuracy, PERCs open up the potential for a variety of new applications that use tangibles on today's ubiquitous multi-touch devices.
{"title":"PERCs: Persistently Trackable Tangibles on Capacitive Multi-Touch Displays","authors":"Simon Voelker, Christian Cherek, Jan Thar, Thorsten Karrer, Christian Thoresen, Kjell Ivar Øvergård, Jan O. Borchers","doi":"10.1145/2807442.2807466","DOIUrl":"https://doi.org/10.1145/2807442.2807466","url":null,"abstract":"Tangible objects on capacitive multi-touch surfaces are usually only detected while the user is touching them. When the user lets go of such a tangible, the system cannot distinguish whether the user just released the tangible, or picked it up and removed it from the surface. We introduce PERCs, persistent capacitive tangibles that \"know\" whether they are currently on a capacitive touch surface or not. This is achieved by adding a small field sensor to the tangible to detect the touch screen's own, weak electromagnetic touch detection probing signal. Thus, unlike previous designs, PERCs do not get filtered out over time by the adaptive signal filters of the touch screen. We provide a technical overview of the theory be- hind PERCs and our prototype construction, and we evaluate detection rates, timing performance, and positional and angular accuracy for PERCs on a variety of unmodified, commercially available multi-touch devices.Through their affordable circuitry and high accuracy, PERCs open up the potential for a variety of new applications that use tangibles on today's ubiquitous multi-touch devices.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"493 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123561545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Foldio: Digital Fabrication of Interactive and Shape-Changing Objects With Foldable Printed Electronics
Simon Olberding, S. Ortega, K. Hildebrandt, Jürgen Steimle
Foldios are foldable interactive objects with embedded input sensing and output capabilities. Foldios combine the advantages of folding for thin, lightweight, and shape-changing objects with the strengths of thin-film printed electronics for embedded sensing and output. To enable designers and end-users to create highly customized interactive foldable objects, we first contribute a new design and fabrication approach. It makes it possible to design the foldable object in a standard 3D environment and to easily add interactive high-level controls, eliminating the need to manually design a fold pattern and low-level circuits for printed electronics. Second, we contribute a set of printable user interface controls for touch input and display output on folded objects. Moreover, we contribute controls for sensing and actuation of shape-changeable objects. We demonstrate the versatility of the approach with a variety of interactive objects that have been fabricated with this framework.
{"title":"Foldio: Digital Fabrication of Interactive and Shape-Changing Objects With Foldable Printed Electronics","authors":"Simon Olberding, S. Ortega, K. Hildebrandt, Jürgen Steimle","doi":"10.1145/2807442.2807494","DOIUrl":"https://doi.org/10.1145/2807442.2807494","url":null,"abstract":"Foldios are foldable interactive objects with embedded input sensing and output capabilities. Foldios combine the advantages of folding for thin, lightweight and shape-changing objects with the strengths of thin-film printed electronics for embedded sensing and output. To enable designers and end-users to create highly custom interactive foldable objects, we contribute a new design and fabrication approach. It makes it possible to design the foldable object in a standard 3D environment and to easily add interactive high-level controls, eliminating the need to manually design a fold pattern and low-level circuits for printed electronics. Second, we contribute a set of printable user interface controls for touch input and display output on folded objects. Moreover, we contribute controls for sensing and actuation of shape-changeable objects. We demonstrate the versatility of the approach with a variety of interactive objects that have been fabricated with this framework.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"60 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120915103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tactile Animation by Direct Manipulation of Grid Displays
Oliver S. Schneider, A. Israr, Karon E. MacLean
Chairs, wearables, and handhelds have become popular sites for spatial tactile display. Visual animators, already expert in using time and space to portray motion, could readily transfer their skills to produce rich haptic sensations if given the right tools. We introduce the tactile animation object, a directly manipulated phantom tactile sensation. This abstraction has two key benefits: 1) efficient, creative, iterative control of spatiotemporal sensations, and 2) the potential to support a variety of tactile grids, including sparse displays. We present Mango, an editing tool for animators, including its rendering pipeline and perceptually optimized interpolation algorithm for sparse vibrotactile grids. In our evaluation, professional animators found it easy to create a variety of vibrotactile patterns, with both experts and novices preferring the tactile animation object over controlling actuators individually.
DOI: 10.1145/2807442.2807470
These Aren't the Commands You're Looking For: Addressing False Feedforward in Feature-Rich Software
B. Lafreniere, Parmit K. Chilana, Adam Fourney, Michael A. Terry
The names, icons, and tooltips of commands in feature-rich software are an important source of guidance when locating and selecting amongst commands. Unfortunately, these cues can mislead users into believing that a command is appropriate for a given task, when another command would be more appropriate, resulting in wasted time and frustration. In this paper, we present command disambiguation techniques that inform the user of alternative commands before, during, and after an incorrect command has been executed. To inform the design of these techniques, we define categories of false-feedforward errors caused by misleading interface cues, and identify causes for each. Our techniques are the first designed explicitly to solve this problem in feature-rich software. A user study showed enthusiasm for the techniques, and revealed their potential to play a key role in learning of feature-rich software.
{"title":"These Aren't the Commands You're Looking For: Addressing False Feedforward in Feature-Rich Software","authors":"B. Lafreniere, Parmit K. Chilana, Adam Fourney, Michael A. Terry","doi":"10.1145/2807442.2807482","DOIUrl":"https://doi.org/10.1145/2807442.2807482","url":null,"abstract":"The names, icons, and tooltips of commands in feature-rich software are an important source of guidance when locating and selecting amongst commands. Unfortunately, these cues can mislead users into believing that a command is appropriate for a given task, when another command would be more appropriate, resulting in wasted time and frustration. In this paper, we present command disambiguation techniques that inform the user of alternative commands before, during, and after an incorrect command has been executed. To inform the design of these techniques, we define categories of false-feedforward errors caused by misleading interface cues, and identify causes for each. Our techniques are the first designed explicitly to solve this problem in feature-rich software. A user study showed enthusiasm for the techniques, and revealed their potential to play a key role in learning of feature-rich software.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133855342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
uniMorph: Fabricating Thin Film Composites for Shape-Changing Interfaces
Felix Heibeck, B. Tome, Clark Della Silva, H. Ishii
Researchers have been investigating shape-changing interfaces; however, technologies for thin, reversible shape change remain complicated to fabricate. uniMorph is an enabling technology for rapid digital fabrication of customized thin-film shape-changing interfaces. By combining the thermoelectric characteristics of copper with the high thermal expansion rate of ultra-high molecular weight polyethylene, we are able to actuate the shape of flexible circuit composites directly. The shape-changing actuation is enabled by a temperature-driven mechanism and reduces the complexity of fabrication for thin shape-changing interfaces. In this paper we describe how to design and fabricate thin uniMorph composites. We present composites that are actuated by either environmental temperature changes or active heating of embedded structures, and provide a systematic overview of shape-changing primitives. Finally, we present different sensing techniques that leverage the existing copper structures or can be seamlessly embedded into the uniMorph composite. To demonstrate the wide applicability of uniMorph, we present several applications in ubiquitous and mobile computing.
{"title":"uniMorph: Fabricating Thin Film Composites for Shape-Changing Interfaces","authors":"Felix Heibeck, B. Tome, Clark Della Silva, H. Ishii","doi":"10.1145/2807442.2807472","DOIUrl":"https://doi.org/10.1145/2807442.2807472","url":null,"abstract":"Researchers have been investigating shape-changing interfaces, however technologies for thin, reversible shape change remain complicated to fabricate. uniMorph is an enabling technology for rapid digital fabrication of customized thin-film shape-changing interfaces. By combining the thermoelectric characteristics of copper with the high thermal expansion rate of ultra-high molecular weight polyethylene, we are able to actuate the shape of flexible circuit composites directly. The shape-changing actuation is enabled by a temperature driven mechanism and reduces the complexity of fabrication for thin shape-changing interfaces. In this paper we describe how to design and fabricate thin uniMorph composites. We present composites that are actuated by either environmental temperature changes or active heating of embedded structures and provide a systematic overview of shape-changing primitives. Finally, we present different sensing techniques that leverage the existing copper structures or can be seamlessly embedded into the uniMorph composite. To demonstrate the wide applicability of uniMorph, we present several applications in ubiquitous and mobile computing.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130505628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gaze vs. Mouse: A Fast and Accurate Gaze-Only Click Alternative
C. Lutteroth, A. Penkar, Gerald Weber
Eye gaze tracking is a promising input method that is gradually finding its way into the mainstream. An obvious question is whether it can be used for point-and-click tasks as an alternative to mouse or touch. Pointing with gaze is both fast and natural, although its accuracy is limited. There are still technical challenges with gaze tracking, as well as inherent physiological limitations. Furthermore, providing an alternative to clicking is challenging. We consider use cases where input based purely on gaze is desired and the click targets are discrete user interface (UI) elements that are too small to be reliably resolved by gaze alone, e.g., links in hypertext. We present Actigaze, a new gaze-only click alternative that is fast and accurate for this scenario. A clickable user interface element is selected by dwelling on one of a set of confirm buttons, based on two main design contributions: first, the confirm buttons stay in fixed positions with easily distinguishable visual identifiers such as colors, enabling procedural learning of the confirm button positions; second, UI elements are associated with confirm buttons through the visual identifiers in a way that minimizes the likelihood of inadvertent clicks. We evaluate two variants of the proposed click alternative, comparing them against the mouse and another gaze-only click alternative.
DOI: 10.1145/2807442.2807461
NanoStylus: Enhancing Input on Ultra-Small Displays with a Finger-Mounted Stylus
Haijun Xia, Tovi Grossman, G. Fitzmaurice
Due to their limited input area, ultra-small devices such as smartwatches are even more prone to occlusion and the fat-finger problem than their larger counterparts, such as smartphones, tablets, and tabletop displays. We present NanoStylus, a finger-mounted fine-tip stylus that enables fast and accurate pointing on a smartwatch with almost no occlusion. The NanoStylus is built from the circuitry of an active capacitive stylus and mounted within a custom 3D-printed thimble-shaped housing unit. A sensor strip is mounted on each side of the device to enable additional gestures. A user study shows that NanoStylus reduces error rate by 80% compared to traditional touch interaction, and by 45% compared to a traditional stylus. This high-precision pointing capability, coupled with the implemented gesture sensing, gives us the opportunity to explore a rich set of interactive applications on a smartwatch form factor.
DOI: 10.1145/2807442.2807500