Face/On: Actuating the Facial Contact Area of a Head-Mounted Display for Increased Immersion
Dennis Wolf, Leo Hnatek, E. Rukzio
DOI: https://doi.org/10.1145/3266037.3271631
In this demonstration, we introduce Face/On, an embedded feedback device that leverages the contact area between the user's face and a virtual reality (VR) head-mounted display (HMD) to provide rich haptic feedback in virtual environments (VEs). Head-worn haptic feedback devices have been explored in previous work to provide directional cues via grids of actuators and localized feedback on the users' skin. Most of these solutions break immersion because of their encumbering, uncomfortable designs and are built around a single actuator type, which limits the overall fidelity and flexibility of the haptic feedback. We present Face/On, a VR HMD face cushion with three types of discreetly embedded actuators that provide rich haptic feedback without encumbering users with invasive instrumentation on the body. By combining vibro-tactile and thermal feedback with electrical muscle stimulation (EMS), Face/On can simulate a wide range of scenarios and benefit from synergy effects between these feedback types.

Demonstrating Gamepad with Programmable Haptic Texture Analog Buttons
Y. Shim, Geehyuk Lee
DOI: https://doi.org/10.1145/3266037.3271648
We demonstrate a haptic feedback method that generates multiple virtual textures on the analog buttons of a gamepad. The method uses haptic illusions evoked by haptic cues delivered in sync with the analog button's movement to change the button's perceived physical properties. Two types of analog buttons on the gamepad, the joystick and the trigger button, are augmented with localized haptic feedback. We implemented two virtual textures for each type of analog button, and these textures can be programmatically controlled to reflect dynamic game situations. We also demonstrate a two-player shooter game to show that the dynamic texture representation of the customized gamepad can enrich the game experience.

Companion - A Software Toolkit for Digitally Aided Pen-and-Paper Tabletop Roleplaying
Sebastian Stickert, Hagen Hiller, Florian Echtler
DOI: https://doi.org/10.1145/3266037.3266097
We present Companion, a software tool tailored towards improving and digitally supporting the pen-and-paper tabletop role-playing experience. Pen-and-paper role-playing games (P&P RPGs) are a concept known since the early 1970s. Since then, the genre has attracted a massive community of players while branching out into several genres and P&P RPG systems to choose from. Due to the highly interactive and dynamic nature of the game, each participant's individual impact on the narrative and interactive aspects of the game is extremely high. The diversity of scenarios within this context reveals a variety of players' needs, as well as factors that limit and enhance game-play. Companion offers an audio management workspace for creating and playing back soundscapes based on visual layouts. It supports interactive image presentation and map exploration, which can incorporate input from any device providing TUIO tracking data. Additionally, a mobile app was developed to be used as a remote control for media activation on the desktop host.

One Button to Rule Them All: Rendering Arbitrary Force-Displacement Curves
Yi-Chi Liao, Sunjun Kim, Antti Oulasvirta
DOI: https://doi.org/10.1145/3266037.3266118
Physical buttons exhibit rich force characteristics over their travel range, commonly described in the form of force-displacement curves. These force characteristics play an important role in users' experiences when pressing a button. However, due to the lack of proper tools for dynamically rendering various force-displacement curves, little prior work has attempted iterative button design improvement. This paper presents Button Simulator, a low-cost 3D-printed physical button capable of displaying any force-displacement curve, with an average error of around 0.034 N. By reading the force-displacement curves of existing push-buttons, we can easily replicate the force characteristics of any button on our Button Simulator. One can even go beyond existing buttons and design non-existent ones in the form of arbitrary force-displacement curves, then use Button Simulator to render the sensation. This project will be open-sourced and the implementation details will be released. Our system can be a useful tool for future researchers, designers, and makers to investigate rich and dynamic button force designs.

OmniEyeball: Spherical Display Equipped With Omnidirectional Camera And Its Application For 360-Degree Video Communication
Zhengqing Li, Shio Miyafuji, Toshiki Sato, H. Kuzuoka, H. Koike
DOI: https://doi.org/10.1145/3266037.3266092
We propose OmniEyeball (OEB), a novel interactive 360° image I/O system. It integrates a spherical display with an omnidirectional camera, enabling it both to capture 360° panoramic live streaming video and to display it. We also present its unique application to symmetric 360° video communication using two OEB terminals, which may solve the narrow field-of-view problem in video communication. In addition, we designed a vision-based touch detection technique as well as features to support 360° video communication.

The Right Content at the Right Time: Contextual Examples for Just-in-time Creative Learning
C. Fraser
DOI: https://doi.org/10.1145/3266037.3266127
People often run into barriers when doing creative tasks with software because it is difficult to translate goals into concrete actions. While expert-made tutorials, examples, and documentation abound online, finding the most relevant content and adapting it to one's own situation and task is a challenge. My research introduces techniques for exposing relevant examples to novices in the context of their own workflows. These techniques are embodied in three systems. The first, RePlay, helps people find solutions when stuck by automatically locating relevant moments from expert-made videos. The second, DiscoverySpace, helps novices get started by mining and recommending expert-made software macros. The third, CritiqueKit, helps novices improve their work by providing ambient guidance and recommendations. Preliminary experiments with RePlay suggest that contextual video clips help people complete targeted tasks. Controlled experiments with DiscoverySpace and CritiqueKit demonstrate that software macros prevent novices from losing confidence, and ambient guidance improves novice output. My research illustrates the power of user communities to support creative learning.

Shared Autonomy for an Interactive AI System
Sharon Zhou, Tong Mu, Karan Goel, Michael S. Bernstein, E. Brunskill
DOI: https://doi.org/10.1145/3266037.3266088
Across many domains, interactive systems either make decisions for us autonomously or yield decision-making authority to us and play a supporting role. However, many settings, such as those in education or the workplace, benefit from sharing this autonomy between the user and the system, and thus from a system that adapts to them over time. In this paper, we pursue two primary research questions: (1) How do we design interfaces to share autonomy between the user and the system? (2) How does shared autonomy alter a user's perception of a system? We present SharedKeys, an interactive shared autonomy system for piano instruction that plays different video segments of a piece for students to emulate and practice. Underlying our approach to shared autonomy is a mixed-observability Markov decision process that estimates a user's desired autonomy level based on her performance and attentiveness. Pilot studies revealed that students sharing autonomy with the system learned more quickly and perceived the system as more intelligent.

Collaborative Virtual Reality for Low-Latency Interaction
Carmine Elvezio, Frank Ling, Jen-Shuo Liu, Steven K. Feiner
DOI: https://doi.org/10.1145/3266037.3271643
In collaborative virtual environments, users must often perform tasks requiring coordinated action between multiple parties. Some cases are symmetric, in which users work together on equal footing, while others are asymmetric, in which one user may have more experience or capabilities than another (e.g., one may guide another in completing a task). We present a multi-user virtual reality system that supports interactions of both these types. Two collaborating users, whether co-located or remote, simultaneously manipulate the same virtual objects in a physics simulation, in tasks that require low-latency networking to perform successfully. We are currently applying this approach to motor rehabilitation, in which a therapist and patient work together.

Artificial Motion Guidance: an Intuitive Device based on Pneumatic Gel Muscle (PGM)
Takashi Goto, Swagata Das, Y. Kurita, K. Kunze
DOI: https://doi.org/10.1145/3266037.3271644
We present a wearable soft exoskeleton sleeve based on pneumatic gel muscles (PGMs). The sleeve consists of four PGMs, is controlled by a computing system, and can actuate four different movements (hand extension, flexion, pronation, and supination). Depending on how strong the actuation is, the user either feels a slight force (haptic feedback) or the hand moves (if the user relaxes the muscles). The paper gives details about the system implementation, the interaction space, and some ideas about application scenarios.

Augmenting Human Hearing Through Interactive Auditory Mediated Reality
Evgeny Stemasov, Gabriel Haas, Michael Rietzler, E. Rukzio
DOI: https://doi.org/10.1145/3266037.3266104
To filter and shut out an increasingly loud environment, many people resort to personal audio technology: they drown out unwanted sounds by wearing headphones. This uniform treatment of all surrounding sounds can have a negative impact on social relations and situational awareness. By leveraging mediation through smarter headphones, users gain more agency over their sense of hearing, for instance by being able to selectively alter the volume and other features of specific sounds without losing the ability to add media. In this work, we propose the vision of interactive auditory mediated reality (AMR). To understand users' attitudes and requirements, we conducted a week-long event sampling study (n = 12) in which users recorded and rated sound sources (n = 225) that they would like to mute, amplify, or turn down. The results indicate that, besides muting, a distinct "quiet-but-audible" volume exists. It caters to two requirements at the same time: aesthetics/comfort and information acquisition.
