FlexStroke: a jamming brush tip simulating multiple painting tools on digital platform
Xin Liu, Haijun Xia, J. Gu. DOI: https://doi.org/10.1145/2508468.2514935
We propose a new system that enables a realistic painting experience on a digital platform and extends it to multiple stroke types for different painting needs. In this paper, we describe how FlexStroke is used as a Chinese brush, an oil brush, and a crayon by changing its jamming tip. The tip offers different levels of stiffness depending on the state of its jamming structure. Visual simulations on PixelSense complement the intuitive painting process with highly realistic display results.
FingerSkate: making multi-touch operations less constrained and more continuous
Jeongmin Son, Geehyuk Lee. DOI: https://doi.org/10.1145/2508468.2514733
Multi-touch operations are sometimes difficult to perform due to musculoskeletal constraints. We propose FingerSkate, a variation of current multi-touch operations that makes them less constrained and more continuous. With FingerSkate, once one starts a multi-touch operation, one can continue it without having to keep both fingers on the screen. In a pilot study, we observed that participants learned to FingerSkate easily and actively used the new technique.
Augmenting braille input through multitouch feedback
Hugo Nicolau, Kyle Montague, J. Guerreiro, Diogo Marques, Tiago Guerreiro, Craig D. Stewart, Vicki L. Hanson. DOI: https://doi.org/10.1145/2508468.2514720
Current touch interfaces lack the rich tactile feedback that allows blind users to detect and correct errors. This is especially relevant for multitouch interactions, such as Braille input. We propose HoliBraille, a system that combines touch input and multi-point vibrotactile output on mobile devices. We believe this technology can offer several benefits to blind users: conveying feedback for complex multitouch gestures, improving input performance, and supporting inconspicuous interactions. In this paper, we present the design of our prototype, which allows users to receive localized multitouch vibrotactile feedback. Preliminary results on perceptual discrimination show an average of 100% and 82% accuracy for single-point and chord discrimination, respectively. Finally, we discuss a text-entry application with rich tactile feedback.
Pixel-based reverse engineering of graphical interfaces
M. Dixon. DOI: https://doi.org/10.1145/2508468.2508469
My dissertation proposes a vision in which anybody can modify any interface of any application. Realizing this vision is difficult because of the rigidity and fragmentation of current interfaces. Specifically, rigidity makes it difficult or impossible for a designer to modify or customize existing interfaces. Fragmentation results from the fact that people generally use many different applications built with a variety of toolkits. Each is implemented differently, so it is difficult to consistently add new functionality. As a result, researchers are often limited to demonstrating new ideas in small testbeds, and practitioners often find it difficult to adopt and deploy ideas from the literature. In my dissertation, I propose transcending the rigidity and fragmentation of modern interfaces by building upon their single largest commonality: that they ultimately consist of pixels painted to a display. Building from this universal representation, I propose pixel-based interpretation to enable modification of interfaces without their source code and independent of their underlying toolkit implementation.
An assembly of soft actuators for an organic user interface
Yoshiharu Ooide, Hiroki Kawaguchi, T. Nojima. DOI: https://doi.org/10.1145/2508468.2514723
An organic user interface (OUI) is an interface based on natural human-human and human-object interaction models. In such interactions, hair and fur play important roles in establishing smooth and natural communication: animals and birds use their hair, fur, and feathers to express emotions, and groom each other when forming closer relationships. Hair and fur are therefore promising materials for an ideal OUI. In this research, we propose the hairlytop interface, a collection of hair-like units driven by shape memory alloys, for use as an OUI. The proposed interface can improve its spatial resolution and can be used to form a hair surface on electrical devices of any shape.
Integrated visual representations for programming with real-world input and output
Jun Kato. DOI: https://doi.org/10.1145/2508468.2508476
As computers become more pervasive, more programs deal with real-world input and output (real-world I/O), such as processing camera images and controlling robots. Real-world I/O usually involves complex data that is difficult to represent as text or symbols, yet most current integrated development environments (IDEs) are equipped only with text-based editors and debuggers. My thesis investigates how visual representations of the real world can be integrated into the text-based development environment to enhance the programming experience. In particular, we have designed and implemented IDEs for three scenarios, all of which use photos and videos representing the real world. Based on these experiences, we discuss "programming with example data," a technique in which the programmer demonstrates examples to the IDE and writes text-based code with the support of those examples.
Multi-touch gesture recognition by single photoreflector
H. Manabe. DOI: https://doi.org/10.1145/2508468.2514933
A simple technique is proposed that uses a single photoreflector to recognize multi-touch gestures. Touch and multi-finger swipe are robustly discriminated and recognized. Further, swipe direction can be detected by adding a gradient to the sensitivity.
Glassified: an augmented ruler based on a transparent display for real-time interactions with paper
Anirudh Sharma, Lirong Liu, P. Maes. DOI: https://doi.org/10.1145/2508468.2514937
We introduce Glassified, a modified ruler with a transparent display to supplement physical strokes made on paper with virtual graphics. Because the display is transparent, both the physical strokes and the virtual graphics are visible in the same plane. A digitizer captures the pen strokes in order to update the graphical overlay, fusing the traditional function of a ruler with the added advantages of a digital, display-based system. We describe use-cases of Glassified in the areas of math and physics and discuss its advantages over traditional systems.
PhysInk: sketching physical behavior
J. Scott, Randall Davis. DOI: https://doi.org/10.1145/2508468.2514930
Describing device behavior is a common task that is currently not well supported by general animation or CAD software. We present PhysInk, a system that enables users to demonstrate 2D behavior by sketching and directly manipulating objects on a physics-enabled stage. Unlike previous tools that simply capture the user's animation, PhysInk captures an understanding of the behavior in a timeline. This enables useful capabilities such as causality-aware editing and finding physically-correct equivalent behavior. We envision PhysInk being used as a physics teacher's sketchpad or a WYSIWYG tool for game designers.
Multi-perspective multi-layer interaction on mobile device
M. Khademi, Mingming Fan, Hossein Mousavi Hondori, C. Lopes. DOI: https://doi.org/10.1145/2508468.2514712
We propose a novel multi-perspective, multi-layer interaction technique for mobile devices that provides an immersive experience of navigating through a 3D object. The mobile device serves as a window through which the user can observe the object in detail from various perspectives by orienting the device differently. Different layers of the object are revealed as the user moves the device toward or away from themselves. Our approach runs in real time, is completely mobile (running on Android), and does not depend on external sensors or displays (e.g., cameras and projectors).