Makers' Marks: Physical Markup for Designing and Fabricating Functional Objects
Valkyrie Savage, Sean Follmer, Jingyi Li, Bjoern Hartmann
To fabricate functional objects, designers create assemblies combining existing parts (e.g., mechanical hinges, electronic components) with custom-designed geometry (e.g., enclosures). Modeling complex assemblies is outside the reach of the growing number of novice "makers" with access to digital fabrication tools. We aim to allow makers to design and 3D print functional mechanical and electronic assemblies. Based on a formative exploration, we created Makers' Marks, a system based on physically authoring assemblies with sculpting materials and annotation stickers. Makers physically sculpt the shape of an object and attach stickers to place existing parts or high-level features (such as parting lines). Our tool extracts the 3D pose of these annotations from a scan of the design, then synthesizes the geometry needed to support integrating desired parts using a library of clearance and mounting constraints. The resulting designs can then be easily 3D printed and assembled. Our approach enables easy creation of complex objects such as TUIs, and leverages physical materials for tangible manipulation and understanding scale. We validate our tool through several design examples: a custom game controller, an animated toy figure, a friendly baby monitor, and a hinged box with integrated alarm.
{"title":"Makers' Marks: Physical Markup for Designing and Fabricating Functional Objects","authors":"Valkyrie Savage, Sean Follmer, Jingyi Li, Bjoern Hartmann","doi":"10.1145/2807442.2807508","DOIUrl":"https://doi.org/10.1145/2807442.2807508","url":null,"abstract":"To fabricate functional objects, designers create assemblies combining existing parts (e.g., mechanical hinges, electronic components) with custom-designed geometry (e.g., enclosures). Modeling complex assemblies is outside the reach of the growing number of novice ``makers' with access to digital fabrication tools. We aim to allow makers to design and 3D print functional mechanical and electronic assemblies. Based on a formative exploration, we created Makers' Marks, a system based on physically authoring assemblies with sculpting materials and annotation stickers. Makers physically sculpt the shape of an object and attach stickers to place existing parts or high-level features (such as parting lines). Our tool extracts the 3D pose of these annotations from a scan of the design, then synthesizes the geometry needed to support integrating desired parts using a library of clearance and mounting constraints. The resulting designs can then be easily 3D printed and assembled. Our approach enables easy creation of complex objects such as TUIs, and leverages physical materials for tangible manipulation and understanding scale. We validate our tool through several design examples: a custom game controller, an animated toy figure, a friendly baby monitor, and a hinged box with integrated alarm.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116208041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SmartTokens: Embedding Motion and Grip Sensing in Small Tangible Objects
Mathieu Le Goc, Pierre Dragicevic, Samuel Huron, Jeremy Boy, Jean-Daniel Fekete
SmartTokens are small-sized tangible tokens that can sense multiple types of motion, multiple types of touch/grip, and send input events wirelessly as state-machine transitions. By providing an open platform for embedding basic sensing capabilities within small form-factors, SmartTokens extend the design space of tangible user interfaces. We describe the design and implementation of SmartTokens and illustrate how they can be used in practice by introducing a novel TUI design for event notification and personal task management.
{"title":"SmartTokens: Embedding Motion and Grip Sensing in Small Tangible Objects","authors":"Mathieu Le Goc, Pierre Dragicevic, Samuel Huron, Jeremy Boy, Jean-Daniel Fekete","doi":"10.1145/2807442.2807488","DOIUrl":"https://doi.org/10.1145/2807442.2807488","url":null,"abstract":"SmartTokens are small-sized tangible tokens that can sense multiple types of motion, multiple types of touch/grip, and send input events wirelessly as state-machine transitions. By providing an open platform for embedding basic sensing capabilities within small form-factors, SmartTokens extend the design space of tangible user interfaces. We describe the design and implementation of SmartTokens and illustrate how they can be used in practice by introducing a novel TUI design for event notification and personal task management.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"243 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132728985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Impacto: Simulating Physical Impact by Combining Tactile Stimulation with Electrical Muscle Stimulation
Pedro Lopes, A. Ion, Patrick Baudisch
We present impacto, a device designed to render the haptic sensation of hitting or being hit in virtual reality. The key idea that allows the small and light impacto device to simulate a strong hit is that it decomposes the stimulus: it renders the tactile aspect of being hit by tapping the skin with a solenoid, and it adds impact to the hit by thrusting the user's arm backwards with electrical muscle stimulation. The device is self-contained, wireless, and small enough for wearable use, leaving the user unencumbered and free to walk around in a virtual environment. Its generic shape allows it to be worn on the legs, to enhance the experience of kicking, or merged into props such as a baseball bat. We demonstrate how to assemble multiple impacto units into a simple haptic suit. Participants in our study rated impact simulated using impacto's combination of solenoid hit and electrical muscle stimulation as more realistic than either technique in isolation.
{"title":"Impacto: Simulating Physical Impact by Combining Tactile Stimulation with Electrical Muscle Stimulation","authors":"Pedro Lopes, A. Ion, Patrick Baudisch","doi":"10.1145/2807442.2807443","DOIUrl":"https://doi.org/10.1145/2807442.2807443","url":null,"abstract":"We present impacto, a device designed to render the haptic sensation of hitting or being hit in virtual reality. The key idea that allows the small and light impacto device to simulate a strong hit is that it decomposes the stimulus: it renders the tactile aspect of being hit by tapping the skin using a solenoid; it adds impact to the hit by thrusting the user's arm backwards using electrical muscle stimulation. The device is self-contained, wireless, and small enough for wearable use, thus leaves the user unencumbered and able to walk around freely in a virtual environment. The device is of generic shape, allowing it to also be worn on legs, so as to enhance the experience of kicking, or merged into props, such as a baseball bat. We demonstrate how to assemble multiple impacto units into a simple haptic suit. Participants of our study rated impact simulated using impacto's combination of solenoid hit and electrical muscle stimulation as more realistic than either technique in isolation.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132752869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EM-Sense: Touch Recognition of Uninstrumented, Electrical and Electromechanical Objects
Gierad Laput, Chouchang Yang, R. Xiao, A. Sample, Chris Harrison
Most everyday electrical and electromechanical objects emit small amounts of electromagnetic (EM) noise during regular operation. When a user makes physical contact with such an object, this EM signal propagates through the user, owing to the conductivity of the human body. By modifying a small, low-cost, software-defined radio, we can detect and classify these signals in real-time, enabling robust on-touch object detection. Unlike prior work, our approach requires no instrumentation of objects or the environment; our sensor is self-contained and can be worn unobtrusively on the body. We call our technique EM-Sense and built a proof-of-concept smartwatch implementation. Our studies show that discrimination between dozens of objects is feasible, independent of wearer, time and local environment.
{"title":"EM-Sense: Touch Recognition of Uninstrumented, Electrical and Electromechanical Objects","authors":"Gierad Laput, Chouchang Yang, R. Xiao, A. Sample, Chris Harrison","doi":"10.1145/2807442.2807481","DOIUrl":"https://doi.org/10.1145/2807442.2807481","url":null,"abstract":"Most everyday electrical and electromechanical objects emit small amounts of electromagnetic (EM) noise during regular operation. When a user makes physical contact with such an object, this EM signal propagates through the user, owing to the conductivity of the human body. By modifying a small, low-cost, software-defined radio, we can detect and classify these signals in real-time, enabling robust on-touch object detection. Unlike prior work, our approach requires no instrumentation of objects or the environment; our sensor is self-contained and can be worn unobtrusively on the body. We call our technique EM-Sense and built a proof-of-concept smartwatch implementation. Our studies show that discrimination between dozens of objects is feasible, independent of wearer, time and local environment.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116424101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joint 5D Pen Input for Light Field Displays
J. Tompkin, Samuel Muff, J. McCann, H. Pfister, J. Kautz, M. Alexa, W. Matusik
Light field displays allow viewers to see view-dependent 3D content as if looking through a window; however, existing work on light field display interaction is limited. Yet such displays have the potential to parallel 2D pen and touch-screen systems, which present a joint input and display surface for natural interaction. We propose a 4D display and interaction space using a dual-purpose lenslet array, which combines light field display with light field pen sensing and allows us to estimate the 3D position and 2D orientation of the pen. The method is simple and fast (150 Hz), with position accuracy of 2-3 mm and precision of 0.2-0.6 mm at distances of 0-350 mm from the lenslet array, and orientation accuracy of 2 degrees and precision of 0.2-0.3 degrees within a 45-degree field of view. Further, we 3D print the lenslet array with embedded baffles to reduce out-of-bounds cross-talk, and use an optical relay to allow interaction behind the focal plane. We demonstrate our joint display/sensing system with interactive light field painting.
{"title":"Joint 5D Pen Input for Light Field Displays","authors":"J. Tompkin, Samuel Muff, J. McCann, H. Pfister, J. Kautz, M. Alexa, W. Matusik","doi":"10.1145/2807442.2807477","DOIUrl":"https://doi.org/10.1145/2807442.2807477","url":null,"abstract":"Light field displays allow viewers to see view-dependent 3D content as if looking through a window; however, existing work on light field display interaction is limited. Yet, they have the potential to parallel 2D pen and touch screen systems, which present a joint input and display surface for natural interaction. We propose a 4D display and interaction space using a dual-purpose lenslet array, which combines light field display and light field pen sensing, and allows us to estimate the 3D position and 2D orientation of the pen. This method is simple, fast (150Hz), with position accuracy of 2-3mm and precision of 0.2-0.6mm from 0-350mm away from the lenslet array, and orientation accuracy of 2 degrees and precision of 0.2-0.3 degrees within a 45 degree field of view. Further, we 3D print the lenslet array with embedded baffles to reduce out-of-bounds cross-talk, and use an optical relay to allow interaction behind the focal plane. We demonstrate our joint display/sensing system with interactive light field painting.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122916845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pin-and-Cross: A Unimanual Multitouch Technique Combining Static Touches with Crossing Selection
Yuexing Luo, Daniel Vogel
We define, explore, and demonstrate a new multitouch interaction space called "pin-and-cross." It combines one or more static touches ("pins") with another touch to cross a radial target, all performed with one hand. A formative study reveals pin-and-cross kinematic characteristics and evaluates fundamental performance and preference for target angles. These results are used to form design guidelines and recognition heuristics for pin-and-cross menus invoked with one and two pin fingers on first touch or after a drag. These guidelines are used to implement different pin-and-cross techniques. A controlled experiment compares a one-finger pin-and-cross contextual menu to a Marking Menu and partial Pie Menu: pin-and-cross is just as accurate and 27% faster when invoked on a draggable object. A photo app demonstrates more pin-and-cross variations for extending two-finger scrolling, selecting modes while drawing, constraining two-finger transformations, and combining pin-and-cross with a Marking Menu.
{"title":"Pin-and-Cross: A Unimanual Multitouch Technique Combining Static Touches with Crossing Selection","authors":"Yuexing Luo, Daniel Vogel","doi":"10.1145/2807442.2807444","DOIUrl":"https://doi.org/10.1145/2807442.2807444","url":null,"abstract":"We define, explore, and demonstrate a new multitouch interaction space called \"pin-and-cross.\" It combines one or more static touches (\"pins\") with another touch to cross a radial target, all performed with one hand. A formative study reveals pin-and-cross kinematic characteristics and evaluates fundamental performance and preference for target angles. These results are used to form design guidelines and recognition heuristics for pin-and-cross menus invoked with one and two pin fingers on first touch or after a drag. These guidelines are used to implement different pin-and-cross techniques. A controlled experiment compares a one finger pin-and-cross contextual menu to a Marking Menu and partial Pie Menu: pin-and-cross is just as accurate and 27% faster when invoked on a draggable object. A photo app demonstrates more pin-and-cross variations for extending two-finger scrolling, selecting modes while drawing, constraining two-finger transformations, and combining pin-and-cross with a Marking Menu.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122549268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Webstrates: Shareable Dynamic Media
C. Klokmose, James R. Eagan, S. Baader, W. Mackay, M. Beaudouin-Lafon
We revisit Alan Kay's early vision of dynamic media that blurs the distinction between documents and applications. We introduce shareable dynamic media that are malleable by users, who may appropriate them in idiosyncratic ways; shareable among users, who collaborate on multiple aspects of the media; and distributable across diverse devices and platforms. We present Webstrates, an environment for exploring shareable dynamic media. Webstrates augment web technology with real-time sharing. They turn web pages into substrates, i.e. software entities that act as applications or documents depending upon use. We illustrate Webstrates with two implemented case studies: users collaboratively author an article with functionally and visually different editors that they can personalize and extend at run-time; and they orchestrate its presentation and audience participation with multiple devices. We demonstrate the simplicity and generative power of Webstrates with three additional prototypes and evaluate it from a systems perspective.
{"title":"Webstrates: Shareable Dynamic Media","authors":"C. Klokmose, James R. Eagan, S. Baader, W. Mackay, M. Beaudouin-Lafon","doi":"10.1145/2807442.2807446","DOIUrl":"https://doi.org/10.1145/2807442.2807446","url":null,"abstract":"We revisit Alan Kay's early vision of dynamic media that blurs the distinction between documents and applications. We introduce shareable dynamic media that are malleable by users, who may appropriate them in idiosyncratic ways; shareable among users, who collaborate on multiple aspects of the media; and distributable across diverse devices and platforms. We present Webstrates, an environment for exploring shareable dynamic media. Webstrates augment web technology with real-time sharing. They turn web pages into substrates, i.e. software entities that act as applications or documents depending upon use. We illustrate Webstrates with two implemented case studies: users collaboratively author an article with functionally and visually different editors that they can personalize and extend at run-time; and they orchestrate its presentation and audience participation with multiple devices. We demonstrate the simplicity and generative power of Webstrates with three additional prototypes and evaluate it from a systems perspective.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133551285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
cLuster: Smart Clustering of Free-Hand Sketches on Large Interactive Surfaces
Florian Perteneder, Martin Bresler, Eva-Maria Grossauer, Joanne Leong, M. Haller
Structuring and rearranging free-hand sketches on large interactive surfaces typically requires making multiple stroke selections. This can be both time-consuming and fatiguing in the absence of well-designed selection tools. Investigating the concept of automated clustering, we conducted a background study which highlighted the fact that people have varying perspectives on how elements in sketches can and should be grouped. In response to these diverse user expectations, we present cLuster, a flexible, domain-independent clustering approach for free-hand sketches. Our approach is designed to accept an initial user selection, which is then used to calculate a linear combination of pre-trained perspectives in real-time. The remaining elements are then clustered. An initial evaluation revealed that in many cases, only a few corrections were necessary to achieve the desired clustering results. Finally, we demonstrate the utility of our approach in a variety of application scenarios.
{"title":"cLuster: Smart Clustering of Free-Hand Sketches on Large Interactive Surfaces","authors":"Florian Perteneder, Martin Bresler, Eva-Maria Grossauer, Joanne Leong, M. Haller","doi":"10.1145/2807442.2807455","DOIUrl":"https://doi.org/10.1145/2807442.2807455","url":null,"abstract":"Structuring and rearranging free-hand sketches on large interactive surfaces typically requires making multiple stroke selections. This can be both time-consuming and fatiguing in the absence of well-designed selection tools. Investigating the concept of automated clustering, we conducted a background study which highlighted the fact that people have varying perspectives on how elements in sketches can and should be grouped. In response to these diverse user expectations, we present cLuster, a flexible, domain-independent clustering approach for free-hand sketches. Our approach is designed to accept an initial user selection, which is then used to calculate a linear combination of pre-trained perspectives in real-time. The remaining elements are then clustered. An initial evaluation revealed that in many cases, only a few corrections were necessary to achieve the desired clustering results. Finally, we demonstrate the utility of our approach in a variety of application scenarios.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134432870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CyclopsRing: Enabling Whole-Hand and Context-Aware Interactions Through a Fisheye Ring
Liwei Chan, Yi-Ling Chen, Chi-Hao Hsieh, Rong-Hao Liang, Bing-Yu Chen
This paper presents CyclopsRing, a ring-style fisheye imaging wearable device worn on the hand webbing to enable whole-hand and context-aware interactions. Observing from a central position on the hand through a fisheye perspective, CyclopsRing sees not only the operating hand but also the environmental contexts involved in hand-based interactions. Because CyclopsRing is a finger-worn device, it also lets users fully preserve the skin feedback of their hands. This paper demonstrates a proof-of-concept device, reports hand-gesture recognition performance using a random decision forest (RDF) method, and, building on the gesture recognizer, presents a set of interaction techniques including on-finger pinch-and-slide input, in-air pinch-and-motion input, palm-writing input, and their interactions with environmental contexts. The experiment obtained an 84.75% recognition rate for hand-gesture input on a database of seven hand gestures collected from 15 participants. To our knowledge, CyclopsRing is the first ring-wearable device that supports whole-hand and context-aware interactions.
{"title":"CyclopsRing: Enabling Whole-Hand and Context-Aware Interactions Through a Fisheye Ring","authors":"Liwei Chan, Yi-Ling Chen, Chi-Hao Hsieh, Rong-Hao Liang, Bing-Yu Chen","doi":"10.1145/2807442.2807450","DOIUrl":"https://doi.org/10.1145/2807442.2807450","url":null,"abstract":"This paper presents CyclopsRing, a ring-style fisheye imaging wearable device that can be worn on hand webbings to en- able whole-hand and context-aware interactions. Observing from a central position of the hand through a fisheye perspective, CyclopsRing sees not only the operating hand, but also the environmental contexts that involve with the hand-based interactions. Since CyclopsRing is a finger-worn device, it also allows users to fully preserve skin feedback of the hands. This paper demonstrates a proof-of-concept device, reports the performance in hand-gesture recognition using random decision forest (RDF) method, and, upon the gesture recognizer, presents a set of interaction techniques including on-finger pinch-and-slide input, in-air pinch-and-motion input, palm-writing input, and their interactions with the environ- mental contexts. The experiment obtained an 84.75% recognition rate of hand gesture input from a database of seven hand gestures collected from 15 participants. To our knowledge, CyclopsRing is the first ring-wearable device that supports whole-hand and context-aware interactions.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134645160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving Virtual Keyboards When All Finger Positions Are Known
Daewoong Choi, Hyeonjoong Cho, J. Cheong
Current virtual keyboards are known to be slower and less convenient than physical QWERTY keyboards because they simply imitate the traditional QWERTY layout on touchscreens. To improve virtual keyboards, we consider two reasonable assumptions based on observations of skilled typists. First, the keys are already assigned to specific fingers for typing; based on this assumption, we suggest restricting each finger to entering only its pre-allocated keys. Second, the non-touching fingers move in correlation with the touching finger because of the intrinsic structure of the human hand. To verify our assumptions, we conducted two experiments with skilled typists. In the first experiment, we statistically verified the second assumption. We then propose a novel virtual keyboard based on our observations. In the second experiment, we show that our suggested keyboard outperforms existing virtual keyboards.
{"title":"Improving Virtual Keyboards When All Finger Positions Are Known","authors":"Daewoong Choi, Hyeonjoong Cho, J. Cheong","doi":"10.1145/2807442.2807491","DOIUrl":"https://doi.org/10.1145/2807442.2807491","url":null,"abstract":"Current virtual keyboards are known to be slower and less convenient than physical QWERTY keyboards because they simply imitate the traditional QWERTY keyboards on touchscreens. In order to improve virtual keyboards, we consider two reasonable assumptions based on the observation of skilled typists. First, the keys are already assigned to each finger for typing. Based on this assumption, we suggest restricting each finger to entering pre-allocated keys only. Second, non-touching fingers move in correlation with the touching finger because of the intrinsic structure of human hands. To verify of our assumptions, we conducted two experiments with skilled typists. In the first experiment, we statistically verified the second assumption. We then suggest a novel virtual keyboard using our observations. In the second experiment, we show that our suggested keyboard outperforms existing virtual keyboards.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128649634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}