Wearable Kinesthetic I/O Device for Sharing Muscle Compliance. Jun Nishida, Kenji Suzuki. https://doi.org/10.1145/3266037.3266100
In this paper, we present a wearable kinesthetic I/O device that can measure and intervene in multiple muscle activities simultaneously through the same electrodes. The developed system includes an I/O module capable of measuring the electromyogram (EMG) of four muscles while simultaneously applying electrical muscle stimulation (EMS). The wearable system is configured in a scalable manner to achieve 1) a high stimulus frequency (up to 70 Hz), 2) wearable dimensions that allow the device to be placed along the limbs, and 3) a flexible number of I/O electrodes (up to 32 channels). In a pilot user study in which wrist compliance was shared between two people, participants were able to recognize the level of their confederate's wrist joint compliance, reported on a 4-point Likert scale. The system could benefit a physical therapist and a patient during peg-board hand rehabilitation by letting them share wrist compliance and grip force, which are usually difficult to observe through visual contact alone.
{"title":"Wearable Kinesthetic I/O Device for Sharing Muscle Compliance","authors":"Jun Nishida, Kenji Suzuki","doi":"10.1145/3266037.3266100","DOIUrl":"https://doi.org/10.1145/3266037.3266100","url":null,"abstract":"In this paper, we present a wearable kinesthetic I/O device, which is able to measure and intervene in multiple muscle activities simultaneously through the same electrodes. The developed system includes an I/O module, capable of measuring the electromyogram (EMG) of four muscle tissues, while applying electrical muscle stimulation (EMS) at the same time. The developed wearable system is configured in a scalable manner for achieving 1) high stimulus frequency (up to 70 Hz), 2) wearable dimensions in which the device can be placed along the limbs, and 3) flexibility of the number of I/O electrodes (up to 32 channels). In a pilot user study, which shared the wrist compliance between two persons, participants were able to recognize the level of their confederate's wrist joint compliance using a 4-point Likert scale. The developed system would benefit a physical therapist and a patient, during hand rehabilitation, using a peg board for sharing their wrist compliance and grip force, which are usually difficult to be observed in a visual contact.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124977867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Interactive Pipeline for Creating Visual Blends. Lydia B. Chilton, S. Petridis, Maneesh Agrawala. https://doi.org/10.1145/3266037.3271646
Visual blends are an advanced graphic design technique to draw users' attention to a message. They blend together two objects in a way that is novel and useful in conveying a message symbolically. This demo presents an interactive pipeline for creating visual blends that follows the iterative design process. Our pipeline decomposes the process into both computational techniques and human microtasks. It allows users to collaboratively generate visual blends with steps involving brainstorming, synthesis, and iteration. Our demo allows individual users to see how existing visual blends were made, edit or improve existing visual blends, and create new visual blends.
{"title":"An Interactive Pipeline for Creating Visual Blends","authors":"Lydia B. Chilton, S. Petridis, Maneesh Agrawala","doi":"10.1145/3266037.3271646","DOIUrl":"https://doi.org/10.1145/3266037.3271646","url":null,"abstract":"Visual blends are an advanced graphic design technique to draw users' attention to a message. They blend together two objects in a way that is novel and useful in conveying a message symbolically. This demo presents an interactive pipeline for creating visual blends that follows the iterative design process. Our pipeline decomposes the process into both computational techniques and human microtasks. It allows users to collaboratively generate visual blends with steps involving brainstorming, synthesis, and iteration. Our demo allows individual users to see how existing visual blends were made, edit or improve existing visual blends, and create new visual blends.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127910565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interactive Tangrami: Rapid Prototyping with Modular Paper-folded Electronics. Michael Wessely, Nadiya Morenko, Jürgen Steimle, M. Schmitz. https://doi.org/10.1145/3266037.3271630
Prototyping interactive objects with personal fabrication tools like 3D printers requires the maker to create each subsequent design artifact from scratch, which produces unnecessary waste and does not allow functional components to be reused. We present Interactive Tangrami, paper-folded and reusable building blocks (Tangramis) that can contain various sensor input and visual output capabilities. We propose a digital design toolkit that lets the user plan the shape and functionality of a design piece. The software manages the communication with the physical artifact and streams the interaction data via the Open Sound Control (OSC) protocol to an application prototyping system (e.g., MaxMSP). The building blocks are fabricated digitally with a rapid and inexpensive ink-jet printing method. Our system allows physical user interfaces to be prototyped within minutes and without knowledge of the underlying technologies. We demonstrate its usefulness with two application examples.
{"title":"Interactive Tangrami: Rapid Prototyping with Modular Paper-folded Electronics","authors":"Michael Wessely, Nadiya Morenko, Jürgen Steimle, M. Schmitz","doi":"10.1145/3266037.3271630","DOIUrl":"https://doi.org/10.1145/3266037.3271630","url":null,"abstract":"Prototyping interactive objects with personal fabrication tools like 3D printers requires the maker to create subsequent design artifacts from scratch which produces unnecessary waste and does not allow to reuse functional components. We present Interactive Tangrami, paper-folded and reusable building blocks (Tangramis) that can contain various sensor input and visual output capabilities. We propose a digital design toolkit that lets the user plan the shape and functionality of a design piece. The software manages the communication to the physical artifact and streams the interaction data via the Open Sound protocol (OSC) to an application prototyping system (e.g. MaxMSP). The building blocks are fabricated digitally with a rapid and inexpensive ink-jet printing method. Our systems allows to prototype physical user interfaces within minutes and without knowledge of the underlying technologies. We demo its usefulness with two application examples.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116952470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sense.Seat: Inducing Improved Mood and Cognition through Multisensorial Priming. Pedro F. Campos, Diogo Cabral, Frederica Gonçalves. https://doi.org/10.1145/3266037.3266105
User interface software and technologies have been evolving significantly and rapidly. This poster presents a breakthrough user experience that leverages multisensorial priming and embedded interaction, introducing an interactive piece of furniture called Sense.Seat. Sensory stimuli such as calm colors, lavender and other scents, and ambient soundscapes have traditionally been used to spark creativity and promote well-being. Sense.Seat is the first computational multisensorial seat that can be digitally controlled to vary the frequency and intensity of visual, auditory, and olfactory stimuli. It is a new user interface shaped as a seat or pod that primes the user to induce improved mood and cognition, thereby improving the work environment.
{"title":"Sense.Seat: Inducing Improved Mood and Cognition through Multisensorial Priming","authors":"Pedro F. Campos, Diogo Cabral, Frederica Gonçalves","doi":"10.1145/3266037.3266105","DOIUrl":"https://doi.org/10.1145/3266037.3266105","url":null,"abstract":"User interface software and technologies have been evolving significantly and rapidly. This poster presents a breakthrough user experience that leverages multisensorial priming and embedded interaction and introduces an interactive piece of furniture called Sense.Seat. Sensory stimuli such as calm colors, lavender and other scents as well as ambient soundscapes have been traditionally used to spark creativity and promote well-being. Sense.Seat is the first computational multisensorial seat that can be digitally controlled and vary the frequency and intensity of visual, auditory and olfactory stimulus. It is a new user interface shaped as a seat or pod that primes the user for inducing improved mood and cognition, therefore improving the work environment.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134164293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enabling Single-Handed Interaction in Mobile and Wearable Computing. H. Yeo. https://doi.org/10.1145/3266037.3266129
Mobile and wearable computing are increasingly pervasive as people carry and use personal devices in everyday life. Screen sizes of such devices are diverging to accommodate both intimate and practical uses: some mobile device screens are becoming larger to enable new experiences (e.g., phablets, tablets, eReaders), whereas screens on wearable devices are becoming smaller so that they fit into more places (e.g., smartwatches, wrist-bands, and eye-wear). However, these trends make it difficult to use such devices with only one hand due to their placement, limited thumb reach, and the fat-finger problem. This is especially true because there are many occasions when a user's other hand is occupied (encumbered) or not available. This thesis work explores, creates, and studies novel interaction techniques that enable effective single-handed usage on mobile and wearable devices, empowering users to achieve more with their smart devices when only one hand is available.
{"title":"Enabling Single-Handed Interaction in Mobile and Wearable Computing","authors":"H. Yeo","doi":"10.1145/3266037.3266129","DOIUrl":"https://doi.org/10.1145/3266037.3266129","url":null,"abstract":"Mobile and wearable computing are increasingly pervasive as people carry and use personal devices in everyday life. Screen sizes of such devices are becoming larger and smaller to accommodate both intimate and practical uses. Some mobile device screens are becoming larger to accommodate new experiences (e.g., phablet, tablet, eReader), whereas screen sizes on wearable devices are becoming smaller to allow them to fit into more places (e.g., smartwatch, wrist-band and eye-wear). However, these trends are making it difficult to use such devices with only one hand due to their placement, limited thumb reach and the fat-finger problem. This is especially true as there are many occasions when a user's other hand is occupied (encumbered) or not available. This thesis work explores, creates and studies novel interaction techniques that enable effective single-hand usage on mobile and wearable devices, empowering users to achieve more with their smart devices when only one hand is available.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124728062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A WOZ Study of Feedforward Information on an Ambient Display in Autonomous Cars. Hauke Sandhaus, E. Hornecker. https://doi.org/10.1145/3266037.3266111
We describe the development and user testing of an ambient display for autonomous vehicles. Instead of providing feedback about driving actions once they have been executed, it communicates driving decisions in advance via light signals in the passengers' peripheral vision. This ambient display was tested in a WoZ-based on-the-road driving simulation of a fully autonomous vehicle. Findings from a preliminary study with 14 participants suggest that such a display might be particularly useful for communicating upcoming inertia changes to passengers.
{"title":"A WOZ Study of Feedforward Information on an Ambient Display in Autonomous Cars","authors":"Hauke Sandhaus, E. Hornecker","doi":"10.1145/3266037.3266111","DOIUrl":"https://doi.org/10.1145/3266037.3266111","url":null,"abstract":"We describe the development and user testing of an ambient display for autonomous vehicles. Instead of providing feedback about driving actions, once executed, it communicates driving decisions in advance, via light signals in passengers\" peripheral vision. This ambient display was tested in an WoZ-based on-the-road-driving simulation of a fully autonomous vehicle. Findings from a preliminary study with 14 participants suggest that such a display might be particularly useful to communicate upcoming inertia changes for passengers.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"242 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132966183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EyeExpress: Expanding Hands-free Input Vocabulary using Eye Expressions. Pin-Sung Ku, Te-Yen Wu, Mike Y. Chen. https://doi.org/10.1145/3266037.3266123
The muscles surrounding the human eye are capable of performing a wide range of expressions, such as squinting, blinking, frowning, and raising the eyebrows. This work explores the use of these ocular expressions to expand the input vocabulary of hands-free interactions. We conducted a series of user studies 1) to understand which eye expressions users could consistently perform among all possible expressions, and 2) to explore how these expressions can be used for hands-free interactions through a user-defined design process. Our results showed that most participants could consistently perform 9 of the 18 possible eye expressions. In the user-defined design study, participants used the eye expressions to create hands-free interactions for state-of-the-art augmented reality (AR) head-mounted displays.
{"title":"EyeExpress: Expanding Hands-free Input Vocabulary using Eye Expressions","authors":"Pin-Sung Ku, Te-Yen Wu, Mike Y. Chen","doi":"10.1145/3266037.3266123","DOIUrl":"https://doi.org/10.1145/3266037.3266123","url":null,"abstract":"The muscles surrounding the human eye are capable of performing a wide range of expressions such as squinting, blinking, frowning, and raising eyebrows. This work explores the use of these ocular expressions to expand the input vocabularies of hands-free interactions. We conducted a series of user studies: 1) to understand which eye expressions users could consistently perform among all possible expressions, 2) to explore how these expressions can be used for hands-free interactions through a user-defined design process. Our study results showed that most participants could consistently perform 9 of the 18 possible eye expressions. Also, in the user define study the participants used the eye expressions to create hands-free interactions for the state-of-the-art augmented reality (AR) head-mounted displays.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122124068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Haptopus: Haptic VR Experience Using Suction Mechanism Embedded in Head-mounted Display. Takayuki Kameoka, Yuki Kon, Takuto Nakamura, H. Kajimoto. https://doi.org/10.1145/3266037.3271634
With the spread of VR experiences using HMDs, many proposals have been made to improve the experience by providing tactile information to the fingertips. However, there are problems, such as the difficulty of attaching and detaching the devices and hindrances to free finger movement. To solve these issues, we developed "Haptopus," which embeds a tactile display in the HMD and presents tactile sensations to the face. In this paper, we conducted a preliminary investigation into the best suction pressure and compared Haptopus to conventional tactile presentation approaches. As a result, we confirmed that Haptopus improves the quality of the VR experience.
{"title":"Haptopus: Haptic VR Experience Using Suction Mechanism Embedded in Head-mounted Display","authors":"Takayuki Kameoka, Yuki Kon, Takuto Nakamura, H. Kajimoto","doi":"10.1145/3266037.3271634","DOIUrl":"https://doi.org/10.1145/3266037.3271634","url":null,"abstract":"With the spread of VR experiences using HMD, many proposals have been made to improve the experiences by providing tactile information to the fingertips. However, there are problems, such as difficulty attaching and detaching the devices and hindrances to free finger movement. To solve these issues, we developed \"Haptopus,\" which embeds a tactile display in the HMD and presents tactile sensations to the face. In this paper, we conducted a preliminary investigation on the best suction pressure and compared Haptopus to conventional tactile presentation approaches. As a result, we confirmed that Haptopus improves the quality of the VR experience.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130130439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mixed-Reality for Object-Focused Remote Collaboration. Martin Feick, Anthony Tang, Scott Bateman. https://doi.org/10.1145/3266037.3266102
In this paper we outline the design of a mixed-reality system to support object-focused remote collaboration. Here, being able to adjust collaborators' perspectives on the object, as well as to understand one another's perspective, is essential for effective collaboration over distance. We propose a low-cost mixed-reality system that allows users to: (1) quickly align and understand each other's perspective; (2) explore objects independently of one another; and (3) render gestures in the remote collaborator's workspace. In this work, we focus on the expert's role and introduce an interaction technique that allows users to quickly manipulate 3D virtual objects in space.
{"title":"Mixed-Reality for Object-Focused Remote Collaboration","authors":"Martin Feick, Anthony Tang, Scott Bateman","doi":"10.1145/3266037.3266102","DOIUrl":"https://doi.org/10.1145/3266037.3266102","url":null,"abstract":"In this paper we outline the design of a mixed-reality system to support object-focused remote collaboration. Here, being able to adjust collaborators' perspectives on the object as well as understand one another's perspective is essential to support effective collaboration over distance. We propose a low-cost mixed-reality system that allows users to: (1) quickly align and understand each other's perspective; (2) explore objects independently from one another, and (3) render gestures in the remote's workspace. In this work, we focus on the expert's role and we introduce an interaction technique allowing users to quickly manipulation 3D virtual objects in space.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"404 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129470901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Haptic Interface Using Tendon Electrical Stimulation. Akifumi Takahashi, K. Tanabe, H. Kajimoto. https://doi.org/10.1145/3266037.3271640
This demonstration corresponds to our previous paper, which deals with our finding that a proprioceptive force sensation can be presented by electrical stimulation applied from the skin surface to the tendon region (Tendon Electrical Stimulation: TES). We showed that TES can elicit a force sensation and that adjusting the current parameters can control the magnitude of that sensation. Unlike electrical muscle stimulation (EMS), which can also present a force sensation by stimulating motor nerves to contract muscles, TES is thought to present a proprioceptive force sensation by stimulating the receptors or sensory nerves inside the tendon that are responsible for sensing the magnitude of muscle contraction. In the demo, we offer attendees the opportunity to try TES.
{"title":"Haptic Interface Using Tendon Electrical Stimulation","authors":"Akifumi Takahashi, K. Tanabe, H. Kajimoto","doi":"10.1145/3266037.3271640","DOIUrl":"https://doi.org/10.1145/3266037.3271640","url":null,"abstract":"This demonstration corresponds to our previous paper, which deals with our finding that a proprioceptive force sensation can be presented by electrical stimulation from the skin surface to the tendon region (Tendon Electrical Stimulation: TES). We showed that TES can elicit a force sensation, and adjusting the current parameters can control the amount of the sensation. Unlike electrical muscle stimulation (EMS), which can also present force sensation by stimulating motor nerves to contract muscles, TES is thought to present a proprioceptive force sensation by stimulating receptors or sensory nerves responsible for recognizing the magnitude of the muscle contraction existing inside the tendon. In the demo, we offer the occasion for trying TES.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"158 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121415199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}