We present a textile sensor capable of detecting multi-touch and multi-pressure input on non-planar surfaces, and demonstrate how such sensors can be fabricated and integrated into pressure-stabilized membrane envelopes (i.e. inflatables). Our sensor design is both stretchable and flexible/bendable and can conform to various three-dimensional surface geometries and shape-changing surfaces. We briefly outline an approach for basic signal acquisition from such sensors and show how they can be leveraged to measure the internal air pressure of inflatable objects without specialized air-pressure sensors. We further demonstrate how standard electronic circuits can be integrated with malleable inflatable objects without the need for rigid enclosures for mechanical protection.
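The pressure-measurement idea above amounts to a calibration lookup: the sensor's baseline reading shifts with membrane tension, so a one-time calibration can map raw readings to internal pressure. A minimal sketch, assuming hypothetical calibration values (the abstract does not publish the actual mapping or sensor units):

```python
from bisect import bisect_right

def estimate_pressure(raw, calibration):
    """Piecewise-linear interpolation from a raw sensor baseline reading to
    internal air pressure. `calibration` is a list of (raw_reading, kPa)
    pairs sorted by raw_reading, recorded in a one-time calibration run."""
    xs = [r for r, _ in calibration]
    ys = [p for _, p in calibration]
    if raw <= xs[0]:
        return ys[0]          # clamp below the calibrated range
    if raw >= xs[-1]:
        return ys[-1]         # clamp above the calibrated range
    i = bisect_right(xs, raw)
    frac = (raw - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + frac * (ys[i] - ys[i - 1])

# Hypothetical calibration: baseline capacitance counts vs. pressure in kPa.
CAL = [(100, 0.0), (140, 2.0), (200, 5.0)]
```

The same lookup works regardless of whether the textile sensor is resistive or capacitive; only the calibration table changes.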
{"title":"A Stretch-Flexible Textile Multitouch Sensor for User Input on Inflatable Membrane Structures & Non-Planar Surfaces","authors":"Kristian Gohlke, E. Hornecker","doi":"10.1145/3266037.3271647","DOIUrl":"https://doi.org/10.1145/3266037.3271647","url":null,"abstract":"We present a textile sensor, capable of detecting multi-touch and multi-pressure input on non-planar surfaces and demonstrate how such sensors can be fabricated and integrated into pressure stabilized membrane envelopes (i.e. inflatables). Our sensor design is both stretchable and flexible/bendable and can conform to various three-dimensional surface geometries and shape-changing surfaces. We briefly outline an approach for basic signal acquisition from such sensors and how they can be leveraged to measure internal air-pressure of inflatable objects without specialized air-pressure sensors. We further demonstrate how standard electronic circuits can be integrated with malleable inflatable objects without the need for rigid enclosures for mechanical protection.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114094738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can natural interaction requirements be fulfilled while still harnessing the "supernatural" fantasy of Virtual Reality (VR)? In this work we used off-the-shelf electromyogram (EMG) sensors as an input device that can afford natural gestures to perform the "supernatural" task of growing one's arm in VR. We recorded 18 participants performing a simple retrieval task in two phases: an initial phase and a learning phase, with the stretch arm disabled and enabled, respectively. The results show that the gestures used in the initial phase differ from the main gestures used to retrieve an object in our system, and that the times taken to complete the learning phase are highly variable across participants.
{"title":"Investigation into Natural Gestures Using EMG for \"SuperNatural\" Interaction in VR","authors":"Chloe Eghtebas, Sandro Weber, G. Klinker","doi":"10.1145/3266037.3266115","DOIUrl":"https://doi.org/10.1145/3266037.3266115","url":null,"abstract":"Can natural interaction requirements be fulfilled while still harnessing the \"supernatural\" fantasy of Virtual Reality (VR)? In this work we used off the shelf Electromyogram (EMG) sensors as an input device which can afford natural gestures to preform the \"supernatural\" task of growing your arm in VR. We recorded 18 participants preforming a simple retrieval task in two phases; an initial and a learning phase where the stretch arm was disabled and enabled respectively. The results show that the gestures used in the initial phase are different than the main gestures used to retrieve an object in our system and that the times taken to complete the learning phase are highly variable across participants.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115968162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Taha K. Moriyama, Takuto Nakamura, Hiroyuki Kajimoto
In this demonstration, as a new haptic presentation method for objects in virtual reality (VR) environments, we show a device that presents the haptic sensation of the fingertip on the forearm rather than on the fingertip itself. The device adopts a five-bar linkage mechanism and can present both the strength and the direction of force. Compared with fingertip-mounted displays, it avoids the weight and size issues that hinder the free movement of the fingers. We have confirmed that the experience in the VR environment is improved compared with the situation without haptic cues, even though haptic information is not presented directly to the fingertip.
{"title":"Wearable Haptic Device that Presents the Haptics Sensation Corresponding to Three Fingers on the Forearm","authors":"Taha K. Moriyama, Takuto Nakamura, Hiroyuki Kajimoto","doi":"10.1145/3266037.3271633","DOIUrl":"https://doi.org/10.1145/3266037.3271633","url":null,"abstract":"In this demonstration, as an attempt of a new haptic presentation method for objects in virtual reality (VR) environment, we show a device that presents the haptic sensation of the fingertip on the forearm, not on the fingertip. This device adopts a five-bar linkage mechanism and it is possible to present the strength, direction of force. Compared with a fingertip mounted type displays, it is possible to address the issues of their weight and size which hinder the free movement of fingers. We have confirmed that the experiences in the VR environment is improved compared with without haptics cues situation regardless of without presenting haptics information directly to the fingertip.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"9 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124943693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Teresa Hirzle, Jan Gugenheimer, Florian Geiselhart, A. Bulling, E. Rukzio
Eye tracking is expected to become an integral part of future augmented reality (AR) head-mounted displays (HMDs) given that it can easily be integrated into existing hardware and provides a versatile interaction modality. To augment objects in the real world, AR HMDs require a three-dimensional understanding of the scene, which is currently solved using depth cameras. In this work we aim to explore how 3D gaze data can be used to enhance scene understanding for AR HMDs by envisioning a symbiotic human-machine depth camera, fusing depth data with 3D gaze information. We present a first proof of concept, exploring to what extent we are able to recognise what a user is looking at by plotting 3D gaze data. To measure 3D gaze, we implemented a vergence-based algorithm and built an eye tracking setup consisting of a Pupil Labs headset and an OptiTrack motion capture system, allowing us to measure 3D gaze inside a 50x50x50 cm volume. We show first 3D gaze plots of "gazed-at" objects and describe our vision of a symbiotic human-machine depth camera that combines a depth camera and human 3D gaze information.
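A vergence-based 3D gaze estimate like the one described can be computed by triangulating the two eyes' gaze rays; since the rays rarely intersect exactly, a common choice is the midpoint of their shortest connecting segment. A minimal sketch (not the authors' implementation; eye origins and gaze directions are assumed to be given in a shared tracker coordinate frame):

```python
def _dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gaze_point_3d(o_l, d_l, o_r, d_r, eps=1e-9):
    """Midpoint of the shortest segment between the left and right gaze rays.
    o_* are eye centers, d_* are gaze direction vectors (any length).
    Returns None when the rays are near-parallel (vergence undefined)."""
    w0 = tuple(a - b for a, b in zip(o_l, o_r))
    a, b, c = _dot(d_l, d_l), _dot(d_l, d_r), _dot(d_r, d_r)
    d, e = _dot(d_l, w0), _dot(d_r, w0)
    denom = a * c - b * b
    if abs(denom) < eps:
        return None
    s = (b * e - c * d) / denom   # parameter along the left ray
    t = (a * e - b * d) / denom   # parameter along the right ray
    p_l = [p + s * q for p, q in zip(o_l, d_l)]
    p_r = [p + t * q for p, q in zip(o_r, d_r)]
    return tuple(0.5 * (x + y) for x, y in zip(p_l, p_r))
```

For example, eyes 6 cm apart both fixating a point 50 cm ahead yield that point back; near-parallel rays (fixation at optical infinity) are rejected, which mirrors the known depth-resolution limit of vergence-based tracking.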
{"title":"Towards a Symbiotic Human-Machine Depth Sensor: Exploring 3D Gaze for Object Reconstruction","authors":"Teresa Hirzle, Jan Gugenheimer, Florian Geiselhart, A. Bulling, E. Rukzio","doi":"10.1145/3266037.3266119","DOIUrl":"https://doi.org/10.1145/3266037.3266119","url":null,"abstract":"Eye tracking is expected to become an integral part of future augmented reality (AR) head-mounted displays (HMDs) given that it can easily be integrated into existing hardware and provides a versatile interaction modality. To augment objects in the real world, AR HMDs require a three-dimensional understanding of the scene, which is currently solved using depth cameras. In this work we aim to explore how 3D gaze data can be used to enhance scene understanding for AR HMDs by envisioning a symbiotic human-machine depth camera, fusing depth data with 3D gaze information. We present a first proof of concept, exploring to what extent we are able to recognise what a user is looking at by plotting 3D gaze data. To measure 3D gaze, we implemented a vergence-based algorithm and built an eye tracking setup consisting of a Pupil Labs headset and an OptiTrack motion capture system, allowing us to measure 3D gaze inside a 50x50x50 cm volume. We show first 3D gaze plots of \"gazed-at\" objects and describe our vision of a symbiotic human-machine depth camera that combines a depth camera and human 3D gaze information.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"226 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121349969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce programmable material and electro-mechanical control to enable a set of hybrid watch user interfaces that symbiotically leverage the joint strengths of electro-mechanical hands and a dynamic watch dial. This approach enables computation and connectivity with existing materials to preserve the inherent physical qualities and abilities of traditional analog watches. We augment the watch's mechanical hands with micro-stepper motors for control, positioning and mechanical expressivity. We extend the traditional watch dial with programmable pigments for non-emissive dynamic patterns. Together, these components enable a unique set of interaction techniques and user interfaces beyond their individual capabilities.
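Positioning mechanical hands with micro-stepper motors, as described above, reduces to mapping a target dial position to a step count and taking the shorter rotation from the current position. A minimal sketch, assuming a hypothetical resolution of 360 steps per revolution (the actual motors' resolution is not stated):

```python
STEPS_PER_REV = 360  # hypothetical micro-stepper resolution

def minute_to_step(minutes):
    """Dial position (in steps) for a minute hand at `minutes` past the hour."""
    return round((minutes % 60) / 60 * STEPS_PER_REV) % STEPS_PER_REV

def shortest_move(current_step, target_step):
    """Signed step delta that reaches the target the shorter way around the dial."""
    delta = (target_step - current_step) % STEPS_PER_REV
    if delta > STEPS_PER_REV / 2:
        delta -= STEPS_PER_REV
    return delta
```

The signed delta matters for expressivity: a hand just past 12 can tick backwards a few steps instead of sweeping almost a full revolution forwards.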
{"title":"Hybrid Watch User Interfaces: Collaboration Between Electro-Mechanical Components and Analog Materials","authors":"A. Olwal","doi":"10.1145/3266037.3271650","DOIUrl":"https://doi.org/10.1145/3266037.3271650","url":null,"abstract":"We introduce programmable material and electro-mechanical control to enable a set of hybrid watch user interfaces that symbiotically leverage the joint strengths of electro-mechanical hands and a dynamic watch dial. This approach enables computation and connectivity with existing materials to preserve the inherent physical qualities and abilities of traditional analog watches. We augment the watch's mechanical hands with micro-stepper motors for control, positioning and mechanical expressivity. We extend the traditional watch dial with programmable pigments for non-emissive dynamic patterns. Together, these components enable a unique set of interaction techniques and user interfaces beyond their individual capabilities.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"152 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114048673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kyung Yun Choi, Darle Shinsato, Shane Zhang, Ken Nakagaki, H. Ishii
We present a tangible memory notebook--reMi--that records ambient sounds and translates them into a tangible and shareable memory using animated paper. The paper replays the recorded sounds and deforms its shape to generate motions synchronized with the sounds. Computer-mediated communication interfaces have allowed us to share, record and recall memories easily through visual records. However, these digital visual cues, trapped behind the device's 2D screen, are not the only means to recall a memory we experienced with more than the sense of vision. To develop a new way to store, recall and share a memory, we investigate how the tangible motion of paper that represents sound can enhance "reminiscence".
{"title":"reMi: Translating Ambient Sounds of Moment into Tangible and Shareable Memories through Animated Paper","authors":"Kyung Yun Choi, Darle Shinsato, Shane Zhang, Ken Nakagaki, H. Ishii","doi":"10.1145/3266037.3266109","DOIUrl":"https://doi.org/10.1145/3266037.3266109","url":null,"abstract":"We present a tangible memory notebook--reMi--that records the ambient sounds and translates them into a tangible and shareable memory using animated paper. The paper replays the recorded sounds and deforms its shape to generate synchronized motions with the sounds. Computer-mediated communication interfaces have allowed us to share, record and recall memories easily through visual records. However, those digital visual-cues that are trapped behind the device's 2D screen are not the only means to recall a memory we experienced with more than the sense of vision. To develop a new way to store, recall and share a memory, we investigate how tangible motion of a paper that represents sound can enhance the \"reminiscence\".","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131768946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The literate programming paradigm presents a program interleaved with natural language text explaining the code's rationale and logic. While this is great for program readers, the labor of creating literate programs deters most program authors from providing this text at authoring time. Instead, as we determine through interviews, developers provide their design rationales after the fact, in discussions with collaborators. We propose to capture these discussions and incorporate them into the code. We have prototyped a tool to link online discussion of code directly to the code it discusses. Incorporating these discussions incrementally creates post-literate programs that convey information to future developers.
{"title":"Post-literate Programming: Linking Discussion and Code in Software Development Teams","authors":"Soya Park, Amy X. Zhang, David R Karger","doi":"10.1145/3266037.3266098","DOIUrl":"https://doi.org/10.1145/3266037.3266098","url":null,"abstract":"The literate programming paradigm presents a program interleaved with natural language text explaining the code's rationale and logic. While this is great for program readers, the labor of creating literate programs deters most program authors from providing this text at authoring time. Instead, as we determine through interviews, developers provide their design rationales after the fact, in discussions with collaborators. We propose to capture these discussions and incorporate them into the code. We have prototyped a tool to link online discussion of code directly to the code it discusses. Incorporating these discussions incrementally creates post-literate programs that convey information to future developers.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134261814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pedro F. Campos, Diogo Cabral, Frederica Gonçalves
User interface software and technologies have been evolving significantly and rapidly. This poster presents a breakthrough user experience that leverages multisensorial priming and embedded interaction, introducing an interactive piece of furniture called Sense.Seat. Sensory stimuli such as calm colors, lavender and other scents, and ambient soundscapes have traditionally been used to spark creativity and promote well-being. Sense.Seat is the first computational multisensorial seat that can be digitally controlled to vary the frequency and intensity of visual, auditory and olfactory stimuli. It is a new user interface shaped as a seat or pod that primes the user toward improved mood and cognition, thereby improving the work environment.
{"title":"Sense.Seat: Inducing Improved Mood and Cognition through Multisensorial Priming","authors":"Pedro F. Campos, Diogo Cabral, Frederica Gonçalves","doi":"10.1145/3266037.3266105","DOIUrl":"https://doi.org/10.1145/3266037.3266105","url":null,"abstract":"User interface software and technologies have been evolving significantly and rapidly. This poster presents a breakthrough user experience that leverages multisensorial priming and embedded interaction and introduces an interactive piece of furniture called Sense.Seat. Sensory stimuli such as calm colors, lavender and other scents as well as ambient soundscapes have been traditionally used to spark creativity and promote well-being. Sense.Seat is the first computational multisensorial seat that can be digitally controlled and vary the frequency and intensity of visual, auditory and olfactory stimulus. It is a new user interface shaped as a seat or pod that primes the user for inducing improved mood and cognition, therefore improving the work environment.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134164293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Michael Wessely, Nadiya Morenko, Jürgen Steimle, M. Schmitz
Prototyping interactive objects with personal fabrication tools such as 3D printers requires the maker to create each subsequent design artifact from scratch, which produces unnecessary waste and does not allow functional components to be reused. We present Interactive Tangrami, paper-folded and reusable building blocks (Tangramis) that can contain various sensor input and visual output capabilities. We propose a digital design toolkit that lets the user plan the shape and functionality of a design piece. The software manages the communication with the physical artifact and streams the interaction data via the Open Sound Control (OSC) protocol to an application prototyping system (e.g. MaxMSP). The building blocks are fabricated digitally with a rapid and inexpensive ink-jet printing method. Our system allows physical user interfaces to be prototyped within minutes and without knowledge of the underlying technologies. We demonstrate its usefulness with two application examples.
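The OSC link mentioned above has a simple wire format: a null-terminated address string padded to a 4-byte boundary, a type-tag string padded the same way, then big-endian arguments. A minimal sketch of encoding a single-float message (the address is hypothetical; a real deployment would more likely use a library such as python-osc):

```python
import struct

def _pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, as OSC strings require."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_float_message(address: str, value: float) -> bytes:
    """Encode an OSC message carrying one float32 argument."""
    return _pad(address.encode("ascii")) + _pad(b",f") + struct.pack(">f", value)

# e.g. a Tangrami pressure reading streamed to MaxMSP (hypothetical address):
packet = osc_float_message("/tangrami/3/pressure", 0.42)
```

A packet like this can be sent over UDP to MaxMSP's `udpreceive` object, which decodes OSC natively.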
{"title":"Interactive Tangrami: Rapid Prototyping with Modular Paper-folded Electronics","authors":"Michael Wessely, Nadiya Morenko, Jürgen Steimle, M. Schmitz","doi":"10.1145/3266037.3271630","DOIUrl":"https://doi.org/10.1145/3266037.3271630","url":null,"abstract":"Prototyping interactive objects with personal fabrication tools like 3D printers requires the maker to create subsequent design artifacts from scratch which produces unnecessary waste and does not allow to reuse functional components. We present Interactive Tangrami, paper-folded and reusable building blocks (Tangramis) that can contain various sensor input and visual output capabilities. We propose a digital design toolkit that lets the user plan the shape and functionality of a design piece. The software manages the communication to the physical artifact and streams the interaction data via the Open Sound protocol (OSC) to an application prototyping system (e.g. MaxMSP). The building blocks are fabricated digitally with a rapid and inexpensive ink-jet printing method. Our systems allows to prototype physical user interfaces within minutes and without knowledge of the underlying technologies. We demo its usefulness with two application examples.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116952470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yoonjeong Cha, Sungu Nam, M. Yi, Jaeseung Jeong, Woontack Woo
Augmented collaboration in shared house design scenarios has been studied widely with various approaches. However, those studies did not consider human perception. Our goal is to lower the user's perceptual load during augmented collaboration in shared space design scenarios. Applying attention theories, we implemented shared head gaze, shared selected object, and collaborative manipulation features in two different versions of our HoloLens-based system. To investigate whether user perceptions of the two versions differ, we conducted an experiment with 18 participants (9 pairs), followed by a survey and semi-structured interviews. The results did not show significant differences between the two versions, but produced interesting insights. Based on the findings, we provide design guidelines for collaborative AR systems.
{"title":"Augmented Collaboration in Shared Space Design with Shared Attention and Manipulation","authors":"Yoonjeong Cha, Sungu Nam, M. Yi, Jaeseung Jeong, Woontack Woo","doi":"10.1145/3266037.3266086","DOIUrl":"https://doi.org/10.1145/3266037.3266086","url":null,"abstract":"Augmented collaboration in a shared house design scenario has been studied widely with various approaches. However, those studies did not consider human perception. Our goal is to lower the user's perceptual load for augmented collaboration in shared space design scenarios. Applying attention theories, we implemented shared head gaze, shared selected object, and collaborative manipulation features in our system in two different versions with HoloLens. To investigate whether user perceptions of the two different versions differ, we conducted an experiment with 18 participants (9 pairs) and conducted a survey and semi-structured interviews. The results did not show significant differences between the two versions, but produced interesting insights. Based on the findings, we provide design guidelines for collaborative AR systems.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123842323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}