Virtual Muscle Force: Communicating Kinesthetic Forces Through Pseudo-Haptic Feedback and Muscle Input
Michael Rietzler, Gabriel Haas, Thomas Dreja, Florian Geiselhart, E. Rukzio
DOI: 10.1145/3332165.3347871
Natural haptic feedback in virtual reality (VR) is complex and challenging due to the intricacy of the necessary stimuli and the respective hardware. Pseudo-haptic feedback aims at providing haptic feedback without actual haptic stimuli, relying instead on other sensory channels (e.g., visual cues). We combine such an approach with the additional input modality of muscle activity, which is mapped to a virtual force that influences the interaction flow. In comparison to existing approaches as well as to no kinesthetic feedback at all, the presented solution significantly increased immersion, enjoyment, and the perceived quality of kinesthetic feedback.
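To make the mapping concrete, here is a minimal sketch (not the authors' implementation) of how a normalized muscle-activity reading could be turned into a virtual force that attenuates the displayed hand motion; the linear force mapping, the control/display gain formula, and all constants are illustrative assumptions.

# Sketch: map a normalized muscle-activity (EMG) reading to a virtual force
# and use it to attenuate the displayed hand motion (pseudo-haptic feedback).
# The constants and the linear mapping are illustrative assumptions only.

def virtual_force(emg_normalized: float, max_force: float = 50.0) -> float:
    """Map a muscle activation in [0, 1] to a virtual counter-force in newtons."""
    return max(0.0, min(1.0, emg_normalized)) * max_force

def displayed_offset(real_offset: float, force: float, stiffness: float = 10.0) -> float:
    """Reduce the visual hand displacement as the virtual force grows, so that
    stronger 'resistance' requires more muscle tension to move the object."""
    gain = 1.0 / (1.0 + force / stiffness)   # control/display ratio < 1 under load
    return real_offset * gain

# Example: at 60% muscle activation the hand appears to move only ~25% as far.
if __name__ == "__main__":
    f = virtual_force(0.6)                   # 30 N of virtual resistance
    print(displayed_offset(0.10, f))         # 0.10 m of real motion -> 0.025 m shown

The idea carried over from the abstract is that a larger virtual force demands more muscle tension before the visual motion "gives way", which is what communicates the kinesthetic force without any haptic actuator.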
INVANER: INteractive VAscular Network Editing and Repair
Valentin Z. Nigolian, T. Igarashi, Hirofumi Seo
DOI: 10.1145/3332165.3347900
Vascular network reconstruction is an essential aspect of the daily practice of medical doctors working with vascular systems. Accurately representing vascular networks, not only graphically but also in a way that encompasses their structure, can be used to run simulations, plan medical procedures or identify real-life diseases, for example. A vascular network is thus reconstructed from a 3D medical image sequence via segmentation and skeletonization. Many automatic algorithms exist to do so but tend to fail for specific corner cases. On the other hand, manual methods exist as well but are tedious to use and require a lot of time. In this paper, we introduce an interactive vascular network reconstruction system called INVANER that relies on a graph-like representation of the network's structure. A general skeleton is obtained with an automatic method and medical practitioners are allowed to manually repair the local defects where this method fails. Our system uses graph-related tools with local effects and introduces two novel tools, dedicated to solving two common problems arising when automatically extracting the centerlines of vascular structures: so-called "Kissing Vessels" and a type of phenomenon we call "Dotted Vessels."
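As an illustration of the graph-like representation the abstract mentions, the sketch below stores the skeleton as a graph of centerline points and performs one hypothetical local repair: splitting a spurious junction where two vessels merely touch ("kissing vessels"). The data layout and the split heuristic are assumptions, not INVANER's actual tools.

import networkx as nx

# Sketch: the vascular skeleton as a graph whose nodes are centerline points
# (3D position + estimated radius) and whose edges are vessel segments.
def build_skeleton(points, segments):
    g = nx.Graph()
    for node_id, (xyz, radius) in points.items():
        g.add_node(node_id, pos=xyz, radius=radius)
    g.add_edges_from(segments)
    return g

# Hypothetical local repair for a "kissing vessels" defect: duplicate the
# spurious junction so the two touching vessels no longer share a node.
# Which neighbours to move would be chosen interactively by the practitioner.
def split_kissing_vessel(g, junction, neighbours_to_detach):
    clone = max(g.nodes) + 1          # assumes integer node ids
    g.add_node(clone, **g.nodes[junction])
    for nb in neighbours_to_detach:
        g.remove_edge(junction, nb)
        g.add_edge(clone, nb)
    return clone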
Session details: Session 2B: Media Authoring
T. Igarashi
DOI: 10.1145/3368372
Session details: Session 5B: Physical Displays
Chris Harrison
DOI: 10.1145/3368378
Tessutivo
Jun Gong, Yu Wu, Lei Yan, T. Seyed, Xing-Dong Yang
DOI: 10.1145/3332165.3347897
We present Tessutivo, a contact-based inductive sensing technique for contextual interactions on interactive fabrics. Our technique recognizes conductive objects (mainly metallic) that are commonly found in households and workplaces, such as keys, coins, and electronic devices. We built a prototype containing a six-by-six grid of spiral-shaped coils made of conductive thread, sewn onto a four-layer fabric structure. We carefully designed the coil shape parameters to maximize sensitivity based on a new inductance approximation formula. Through a ten-participant study, we evaluated the performance of the proposed sensing technique across 27 common objects and achieved 93.9% real-time accuracy for object recognition. We conclude by presenting several applications to demonstrate the unique interactions enabled by our technique.
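A rough sketch of the recognition step implied by the abstract: a 6x6 grid of inductance readings is matched against previously recorded object profiles. The baseline subtraction, the nearest-neighbour matching, and the toy profiles are assumptions; the paper's actual classifier and its inductance approximation formula are not reproduced here.

import numpy as np

# Sketch: classify an object from a 6x6 grid of coil inductance shifts
# (relative to an empty-fabric baseline) by nearest-neighbour matching
# against previously recorded profiles. Feature choice and metric are
# illustrative assumptions only.
def classify(reading, profiles):
    flat = np.asarray(reading).ravel()
    return min(profiles, key=lambda name: np.linalg.norm(profiles[name].ravel() - flat))

# Example usage with two toy profiles (values are made up).
profiles = {
    "key":  np.full((6, 6), 0.1),
    "coin": np.full((6, 6), 0.5),
}
print(classify(np.full((6, 6), 0.45), profiles))   # -> "coin"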
Modeling the Uncertainty in 2D Moving Target Selection
Jin Huang, Feng Tian, Nianlong Li, Xiangmin Fan
DOI: 10.1145/3332165.3347880
Understanding the selection uncertainty of moving targets is a fundamental research problem in HCI. However, the few existing works in this domain mainly focus on selecting 1D moving targets with particular input devices, and model generalizability has not been extensively investigated. In this paper, we propose a 2D Ternary-Gaussian model to describe the selection uncertainty manifested in the endpoint distribution of moving target selection. We explore and compare two candidate methods for generalizing the problem space from 1D to 2D tasks, and evaluate their performance with three input modalities: mouse, stylus, and finger touch. By applying the proposed model to assist target selection, we achieved up to a 4% improvement in pointing speed and a 41% improvement in pointing accuracy compared with two state-of-the-art selection techniques. In addition, when we tested our model on predicting pointing errors in a realistic user interface, we observed a high fit (R² = 0.94).
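For intuition about endpoint-distribution models of this kind, the sketch below estimates the probability that a selection endpoint drawn from a bivariate Gaussian lands inside a circular moving target. It is a generic illustration, not the paper's Ternary-Gaussian parameterization, and all numbers are made up.

import numpy as np

# Sketch: Monte-Carlo estimate of the probability that a selection endpoint,
# drawn from a bivariate Gaussian endpoint distribution, lands inside a
# circular target of radius r. Generic illustration, not the paper's model.
def hit_probability(mu, cov, target_center, radius, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    endpoints = rng.multivariate_normal(mu, cov, size=n)
    d = np.linalg.norm(endpoints - np.asarray(target_center), axis=1)
    return float(np.mean(d <= radius))

# Example: endpoints biased 5 px behind a target of radius 20 px.
print(hit_probability(mu=[95, 100], cov=[[64, 0], [0, 64]],
                      target_center=[100, 100], radius=20))

A selection technique could use such a probability estimate to decide which of several nearby moving targets the user most likely intended to acquire.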
CAVRN
Sebastian Herscher, Connor DeFanti, N. Vitovitch, Corinne Brenner, Haijun Xia, Kris Layng, Ken Perlin
DOI: 10.1145/3332165.3347929
The virtual reality ecosystem has gained momentum in the gaming, entertainment, and enterprise markets, but is hampered by limitations in concurrent user count, throughput, and accessibility to mass audiences. Based on our analysis of the current state of the virtual reality ecosystem and relevant aspects of traditional media, we propose a set of design hypotheses for practical and effective seated virtual reality experiences of scale. Said hypotheses manifest in the Collective Audience Virtual Reality Nexus (CAVRN), a framework and management system for large-scale (30+ user) virtual reality deployment in a theater-like physical setting. A mixed methodology study of CAVE, an experience implemented using CAVRN, generated rich insights into the proposed hypotheses. We discuss the implications of our findings on content design, audience representation, and audience interaction.
Learning Cooperative Personalized Policies from Gaze Data
Christoph Gebhardt, Brian Hecox, B. V. Opheusden, Daniel J. Wigdor, James M. Hillis, Otmar Hilliges, Hrvoje Benko
DOI: 10.1145/3332165.3347933
An ideal Mixed Reality (MR) system would only present virtual information (e.g., a label) when it is useful to the person. However, deciding when a label is useful is challenging: it depends on a variety of factors, including the current task, previous knowledge, context, etc. In this paper, we propose a Reinforcement Learning (RL) method to learn when to show or hide an object's label given eye movement data. We demonstrate the capabilities of this approach by showing that an intelligent agent can learn cooperative policies that better support users in a visual search task than manually designed heuristics. Furthermore, we show the applicability of our approach to more realistic environments and use cases (e.g., grocery shopping). By posing MR object labeling as a model-free RL problem, we can learn policies implicitly by observing users' behavior without requiring a visual search model or data annotation.
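As a rough illustration of posing label display as a model-free RL problem, the sketch below implements tabular Q-learning over a coarse, discretized gaze state with show/hide actions. The state encoding, the reward signal, and the hyperparameters are assumptions, not the paper's agent, which learns implicitly from users' eye-movement behavior.

import random
from collections import defaultdict

# Sketch: tabular Q-learning for a show/hide labeling policy. The state is a
# coarse, discretized gaze feature (e.g., a dwell-time bucket on the object);
# the reward and discretization are illustrative assumptions only.
ACTIONS = ("hide", "show")

class LabelAgent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:           # explore occasionally
            return random.choice(ACTIONS)
        return max(self.q[state], key=self.q[state].get)

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[next_state].values())
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])

# Hypothetical reward: +1 when a shown label was actually fixated, -1 when a
# shown label was ignored, 0 when hidden. The real system derives its signal
# from observed user behavior rather than explicit annotation.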
RFTouchPads
Meng-Ju Hsieh, Jr-Ling Guo, Chin-Yuan Lu, Han-Wei Hsieh, Rong-Hao Liang, Bing-Yu Chen
DOI: 10.1145/3332165.3347910
This paper presents RFTouchPads, a system of batteryless and wireless modular hardware designs of two-dimensional (2D) touch sensor pads based on ultra-high frequency (UHF) radio-frequency identification (RFID) technology. In this system, multiple RFID IC chips are connected to an antenna in parallel. Each chip connects only one of its endpoints to the antenna; hence, the module normally turns off when it receives insufficient energy to operate. When a finger touches the circuit trace attached to the other endpoint of a chip, the finger functions as part of the antenna and turns the connected chip on, while the touch location is determined from the chip's ID. Based on this principle, we propose two hardware designs, namely, StickerPad and TilePad. StickerPad is a flexible 3×3 touch-sensing pad suitable for applications on curved surfaces such as the human body. TilePad is a modular 3×3 touch-sensing pad that supports modular area expansion by tiling and provides a more flexible deployment because its antenna is folded. Our implementation allows 2D touch inputs to be reliably detected 2 m away from a remote antenna of an RFID reader. The proposed batteryless, wireless, and modular hardware design enables fine-grained and less-constrained 2D touch inputs in various ubiquitous computing applications.
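The decoding step described in the abstract, where a chip only responds while a finger closes its antenna connection so that its ID identifies the touched cell, can be sketched as a simple ID-to-coordinate lookup. The chip-ID naming and the reader interface below are hypothetical, since the actual UHF reader API is not specified in the abstract.

# Sketch: map RFID chip IDs that become readable (because a finger completes
# the antenna connection) to 2D touch positions on a 3x3 pad. The chip-ID
# layout and the source of `visible_ids` are hypothetical placeholders for
# whatever the UHF reader reports.
CHIP_TO_CELL = {  # chip ID -> (row, column) on the 3x3 pad
    f"CHIP_{r}{c}": (r, c) for r in range(3) for c in range(3)
}

def decode_touches(visible_ids):
    """Chips only respond while touched, so every visible ID is a touch point."""
    return [CHIP_TO_CELL[i] for i in visible_ids if i in CHIP_TO_CELL]

# Example: the reader reports CHIP_02 and CHIP_21 -> touches at (0, 2) and (2, 1).
print(decode_touches(["CHIP_02", "CHIP_21"]))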
Mantis
G. Barnaby, A. Roudaut
DOI: 10.1163/2214-8647_dnp_e721650 (as listed in the source record; published 2019-10-17)