Pneu-Multi-Tools: Auto-Folding and Multi-Shapes Interface by Pneumatics in Virtual Reality
Sheng-Pei Hu, June-Hao Hou

Pneu-Multi-Tools is a novel passive haptic feedback (PHF) wearable device that lets users grip differently shaped virtual props in virtual reality (VR) by sensing the shape changes of foldable airbags driven by pneumatics. This research proposes a solution to a long-standing limitation of haptic interfaces for VR: their restriction to primitive shapes. Depending on the number and orientation of the folding hinges on a single airbag, TPU films can be manufactured into four folding shapes (clip, rectangle, cylinder, and cone). Pneu-Multi-Tools, which stacks airbags with different folding shapes and folds automatically, therefore enables users to handle multiple props intuitively in VR games. The interface provides three interaction scenarios for multi-prop games, "Pick-Up", "Order", and "Hot Key", which let users switch props more efficiently.

DOI: https://doi.org/10.1145/3332167.3357107
LiftTiles: Modular and Reconfigurable Room-scale Shape Displays through Retractable Inflatable Actuators
R. Suzuki, Ryosuke Nakayama, Dan Liu, Y. Kakehi, M. Gross, Daniel Leithinger

This paper introduces LiftTiles, a modular and reconfigurable room-scale shape display. LiftTiles consists of an array of retractable, inflatable actuators that are compact (e.g., 15 cm tall) and light (e.g., 1.8 kg) while extending up to 1.5 m to allow for large-scale shape transformation. Inflatable actuation also provides a robust structure that can support heavy objects (e.g., a 10 kg weight). This paper describes the design and implementation of LiftTiles and explores the application space of reconfigurable room-scale shape displays.

DOI: https://doi.org/10.1145/3332167.3357105
Machine-o-Matic: A Programming Environment for Prototyping Digital Fabrication Workflows
Jasper Tran O'Leary, Nadya Peek

We propose a programming environment for prototyping workflows that consist of custom digital fabrication machines and user-defined interactions. At its core, Machine-o-Matic comprises a domain-specific programming language for defining custom CNC machines as a configuration of tools and moving stages connected together. Given a software-defined machine configuration, the language compiles to firmware code that allows a user to control and test a physical machine immediately. The language includes constructs for users to define custom actions with the tool and to interface with input from sensors or a camera feed. To aid users in writing Machine-o-Matic programs, we include a drag-and-drop GUI for assembling, simulating, and experimenting with potential machine configurations before physically fabricating them. We present three proofs of concept to showcase the potential of our programming environment.

DOI: https://doi.org/10.1145/3332167.3356897
Gaze-based Product Filtering: A System for Creating Adaptive User Interfaces to Personalize Stateless Point-of-Sale Machines
Melanie Heck, Janick Edinger, Christian Becker

User interfaces in self-order terminals aim to satisfy the information needs of a broad audience and thus easily become cluttered. Online shops channel the user's attention by presenting personalized product recommendations based on previously gathered user data. In contrast, stateless point-of-sale machines generally have access to neither the user's personal information nor their previous purchase behavior, so user preferences must be determined during the interaction. We therefore propose using gaze data to determine preferences in real time. In this paper we present a system for dynamic gaze-based filtering.

DOI: https://doi.org/10.1145/3332167.3357120
Say and Find it: A Multimodal Wearable Interface for People with Visual Impairment
Taeyong Kim, Sanghong Kim, Joonhee Choi, Youngsun Lee, Bowon Lee

Recent advances in computer vision and natural language processing using deep neural networks (DNNs) have enabled rich and intuitive multimodal interfaces. However, intelligent assistance systems for people with visual impairment remain underexplored. In this work, we present an interactive object recognition and guidance interface for blind and partially sighted people, based on multimodal interaction and running on an embedded mobile device. We demonstrate that the proposed solution using DNNs can effectively assist visually impaired people. We believe that this work will provide new and helpful insights for designing intelligent assistance systems in the future.

DOI: https://doi.org/10.1145/3332167.3357104