M-Hair: Creating Novel Tactile Feedback by Augmenting the Body Hair to Respond to Magnetic Field
Roger Boldu, Sambhav Jain, J. P. F. Cortés, Haimo Zhang, Suranga Nanayakkara
DOI: 10.1145/3332165.3347955
In this paper, we present M-Hair, a novel method for providing tactile feedback by stimulating only the body hair, without touching the skin. It works by applying passive magnetic materials to the body hair, which are then actuated by external magnetic fields. Our user study suggested that the value of the M-Hair mechanism lies in inducing affective sensations such as pleasantness, rather than in discriminating features such as shape, size, and direction. This work invites future research to use this method in applications that induce emotional responses or affective states, and as a research tool for investigating this novel sensation.
{"title":"M-Hair: Creating Novel Tactile Feedback by Augmenting the Body Hair to Respond to Magnetic Field","authors":"Roger Boldu, Sambhav Jain, J. P. F. Cortés, Haimo Zhang, Suranga Nanayakkara","doi":"10.1145/3332165.3347955","DOIUrl":"https://doi.org/10.1145/3332165.3347955","url":null,"abstract":"In this paper, we present M-Hair, a novel method for providing tactile feedback by stimulating only the body hair without touching the skin. It works by applying passive magnetic materials to the body hair, which is actuated by external magnetic fields. Our user study suggested that the value of the M-hair mechanism is in inducing affective sensations such as pleasantness, rather than effectively discriminating features such as shape, size, and direction. This work invites future research to use this method in applications that induce emotional responses or affective states, and as a research tool for investigations of this novel sensation.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132318328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Videostrates
C. Klokmose, C. Rémy, Janus Bager Kristensen, Rolf Bagge, Michel Beaudouin-Lafon, W. Mackay
DOI: 10.1145/3332165.3347912
We present Videostrates, a concept and a toolkit for creating real-time collaborative video editing tools. Videostrates supports both live and recorded video composition with a declarative HTML-based notation, combining both simple and sophisticated editing tools that can be used collaboratively. Videostrates is programmable and unleashes the power of the modern web platform for video manipulation. We demonstrate its potential through three use scenarios: collaborative video editing with multiple tools and devices; orchestration of multiple live streams that are recorded and broadcast to a popular streaming platform; and programmatic creation of video using WebGL and shaders for blue screen effects. These scenarios only scratch the surface of Videostrates' potential, which opens up a design space for novel collaborative video editors with fully programmable interfaces.
{"title":"Videostrates","authors":"C. Klokmose, C. Rémy, Janus Bager Kristensen, Rolf Bagge, Michel Beaudouin-Lafon, W. Mackay","doi":"10.1145/3332165.3347912","DOIUrl":"https://doi.org/10.1145/3332165.3347912","url":null,"abstract":"We present Videostrates, a concept and a toolkit for creating real-time collaborative video editing tools. Videostrates supports both live and recorded video composition with a declarative HTML-based notation, combining both simple and sophisticated editing tools that can be used collaboratively. Videostrates is programmable and unleashes the power of the modern web platform for video manipulation. We demonstrate its potential through three use scenarios: collaborative video editing with multiple tools and devices; orchestration of multiple live streams that are recorded and broadcast to a popular streaming platform; and programmatic creation of video using WebGL and shaders for blue screen effects. These scenarios only scratch the surface of Videostrates' potential, which opens up a design space for novel collaborative video editors with fully programmable interfaces.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115200521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SensorSnaps
A. Dementyev, Tomás Vega Gálvez, A. Olwal
DOI: 10.1145/3332165.3347913
Adding electronics to textiles can be time-consuming and requires technical expertise. We introduce SensorSnaps, low-power wireless sensor nodes that integrate seamlessly into the caps of fabric snap fasteners. SensorSnaps provide a new technique for quickly and intuitively augmenting any location on clothing with sensing capabilities: they securely attach to, and detach from, ubiquitous commercial snap fasteners. Using inertial measurement units, SensorSnaps detect tap and rotation gestures, as well as track body motion. We optimized the power consumption so that SensorSnaps work continuously for 45 minutes, and for up to 4 hours in capacitive-touch standby mode. We present applications in which SensorSnaps serve as gestural interfaces for music player control, cursor control, and a motion-tracking suit. Our user study showed that a SensorSnap could be attached in around 71 seconds, similar to an off-the-shelf snap, and participants found the gestures easy to learn and perform. SensorSnaps could allow anyone to effortlessly add sophisticated sensing capabilities to ubiquitous snap fasteners.
{"title":"SensorSnaps","authors":"A. Dementyev, Tomás Vega Gálvez, A. Olwal","doi":"10.1145/3332165.3347913","DOIUrl":"https://doi.org/10.1145/3332165.3347913","url":null,"abstract":"Adding electronics to textiles can be time-consuming and requires technical expertise. We introduce SensorSnaps, low-power wireless sensor nodes that seamlessly integrate into caps of fabric snap fasteners. SensorSnaps provide a new technique to quickly and intuitively augment any location on the clothing with sensing capabilities. SensorSnaps securely attach and detach from ubiquitous commercial snap fasteners. Using inertial measurement units, the SensorSnaps detect tap and rotation gestures, as well as track body motion. We optimized the power consumption for SensorSnaps to work continuously for 45 minutes and up to 4 hours in capacitive touch standby mode. We present applications in which the SensorSnaps are used as gestural interfaces for a music player controller, cursor control, and motion tracking suit. The user study showed that SensorSnap could be attached in around 71 seconds, similar to attaching off-the-shelf snaps, and participants found the gestures easy to learn and perform. SensorSnaps could allow anyone to effortlessly add sophisticated sensing capacities to ubiquitous snap fasteners.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116319579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LabelAR
Michael J. Laielli, James Smith, Giscard Biamby, Trevor Darrell, B. Hartmann
DOI: 10.1145/3332165.3347927
Computer vision is applied in an ever-expanding range of applications, many of which require custom training data to perform well. We present LabelAR, a novel interface for the rapid collection of labeled training images to improve CV-based object detectors. LabelAR leverages the spatial tracking capabilities of an AR-enabled camera, allowing users to place persistent bounding volumes that stay centered on real-world objects. The interface then guides the user to move the camera to cover a wide variety of viewpoints. We eliminate the need for post hoc labeling of images by automatically projecting 2D bounding boxes around objects in the images as they are captured from AR-tracked viewpoints. In a user study with 12 participants, LabelAR significantly outperforms existing approaches in terms of the trade-off between detection performance and collection time.
{"title":"LabelAR","authors":"Michael J. Laielli, James Smith, Giscard Biamby, Trevor Darrell, B. Hartmann","doi":"10.1145/3332165.3347927","DOIUrl":"https://doi.org/10.1145/3332165.3347927","url":null,"abstract":"Computer vision is applied in an ever expanding range of applications, many of which require custom training data to perform well. We present a novel interface for rapid collection of labeled training images to improve CV-based object detectors. LabelAR leverages the spatial tracking capabilities of an AR-enabled camera, allowing users to place persistent bounding volumes that stay centered on real-world objects. The interface then guides the user to move the camera to cover a wide variety of viewpoints. We eliminate the need for post hoc labeling of images by automatically projecting 2D bounding boxes around objects in the images as they are captured from AR-marked viewpoints. In a user study with 12 participants, LabelAR significantly outperforms existing approaches in terms of the trade-off between detection performance and collection time.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116237526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BubBowl: Display Vessel Using Electrolysis Bubbles in Drinkable Beverages
Ayaka Ishii, I. Siio
DOI: 10.1145/3332165.3347923
We study a display that presents digital information using bubbles. Conventional bubble displays represent pixels with air supplied from outside the water and therefore require moving parts, which makes it difficult to increase the number of pixels at low cost. We propose a liquid-surface display whose pixels are bubble clusters generated by electrolysis, and present BubBowl, a cup-type device that generates a 10×10-pixel dot-matrix pattern on the surface of a beverage. Our technique requires neither an external gas supply nor moving parts. With the proposed electrolysis method, a higher-resolution display can easily be realized using a PCB with a higher density of matrix electrodes. Moreover, the method is simple and practical and can be used in daily life, for example to present information with bubbles on the surface of coffee in a cup.
{"title":"BubBowl: Display Vessel Using Electrolysis Bubbles in Drinkable Beverages","authors":"Ayaka Ishii, I. Siio","doi":"10.1145/3332165.3347923","DOIUrl":"https://doi.org/10.1145/3332165.3347923","url":null,"abstract":"Research was conducted regarding a display that presents digital information using bubbles. Conventional bubble displays require moving parts, because it is common to use air taken from outside of the water to represent pixels. However, it is difficult to increase the number of pixels at a low cost. We propose a liquid-surface display using pixels of bubble clusters generated from electrolysis, and present the cup-type device BubBowl, which generates a 10×10 pixel dot matrix pattern on the surface of a beverage. Our technique requires neither a gas supply from the outside nor moving parts. Using the proposed electrolysis method, a higher-resolution display can easily be realized using a PCB with a higher density of matrix electrodes.Moreover, the method is simple and practical, and can be utilized in daily life, such as for presenting information using bubbles on the surface of coffee in a cup.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"153 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116579067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Session 9B: 3D and VR Input","authors":"David Lindbauer","doi":"10.1145/3368386","DOIUrl":"https://doi.org/10.1145/3368386","url":null,"abstract":"","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121512167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Eye&Head: Synergetic Eye and Head Movement for Gaze Pointing and Selection
Ludwig Sidenmark, Hans-Werner Gellersen
DOI: 10.1145/3332165.3347921
Eye gaze involves the coordination of eye and head movement to acquire gaze targets, but existing approaches to gaze pointing are based on eye-tracking in abstraction from head motion. We propose to leverage the synergetic movement of eye and head, and identify design principles for Eye&Head gaze interaction. We introduce three novel techniques that build on the distinction of head-supported versus eyes-only gaze, to enable dynamic coupling of gaze and pointer, hover interaction, visual exploration around pre-selections, and iterative and fast confirmation of targets. We demonstrate Eye&Head interaction on applications in virtual reality, and evaluate our techniques against baselines in pointing and confirmation studies. Our results show that Eye&Head techniques enable novel gaze behaviours that provide users with more control and flexibility in fast gaze pointing and selection.
{"title":"Eye&Head: Synergetic Eye and Head Movement for Gaze Pointing and Selection","authors":"Ludwig Sidenmark, Hans-Werner Gellersen","doi":"10.1145/3332165.3347921","DOIUrl":"https://doi.org/10.1145/3332165.3347921","url":null,"abstract":"Eye gaze involves the coordination of eye and head movement to acquire gaze targets, but existing approaches to gaze pointing are based on eye-tracking in abstraction from head motion. We propose to leverage the synergetic movement of eye and head, and identify design principles for Eye&Head gaze interaction. We introduce three novel techniques that build on the distinction of head-supported versus eyes-only gaze, to enable dynamic coupling of gaze and pointer, hover interaction, visual exploration around pre-selections, and iterative and fast confirmation of targets. We demonstrate Eye&Head interaction on applications in virtual reality, and evaluate our techniques against baselines in pointing and confirmation studies. Our results show that Eye&Head techniques enable novel gaze behaviours that provide users with more control and flexibility in fast gaze pointing and selection.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124655362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
milliMorph -- Fluid-Driven Thin Film Shape-Change Materials for Interaction Design
Qiuyu Lu, Jifei Ou, João Wilbert, André Haben, Haipeng Mi, H. Ishii
DOI: 10.1145/3332165.3347956
This paper presents a design space, a fabrication system, and applications for creating fluidic chambers and channels at millimeter scale for tangible actuated interfaces. The ability to design and fabricate millifluidic chambers allows one to create high-frequency actuation, sequential control of flows, and high-resolution designs on thin-film materials. We propose a four-dimensional design space for these fluidic chambers, a novel heat-sealing system that enables easy and precise millifluidics fabrication, and application demonstrations of the fabricated materials for haptics, ambient devices, and robotics. As shape-change materials are increasingly integrated into the design of novel interfaces, milliMorph enriches the library of fluid-driven shape-change materials and demonstrates new design opportunities that are unique to the millimeter scale for product and interaction design.
{"title":"milliMorph -- Fluid-Driven Thin Film Shape-Change Materials for Interaction Design","authors":"Qiuyu Lu, Jifei Ou, João Wilbert, André Haben, Haipeng Mi, H. Ishii","doi":"10.1145/3332165.3347956","DOIUrl":"https://doi.org/10.1145/3332165.3347956","url":null,"abstract":"This paper presents a design space, a fabrication system and applications of creating fluidic chambers and channels at millimeter scale for tangible actuated interfaces. The ability to design and fabricate millifluidic chambers allows one to create high frequency actuation, sequential control of flows and high resolution design on thin film materials. We propose a four dimensional design space of creating these fluidic chambers, a novel heat sealing system that enables easy and precise millifluidics fabrication, and application demonstrations of the fabricated materials for haptics, ambient devices and robotics. As shape-change materials are increasingly integrated in designing novel interfaces, milliMorph enriches the library of fluid-driven shape-change materials, and demonstrates new design opportunities that is unique at millimeter scale for product and interaction design.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125264876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SCALE
Takatoshi Yoshida, Xiaoyan Shen, Koichi Yoshino, Ken Nakagaki, H. Ishii
DOI: 10.1145/3332165.3347935
ANDRIAN DJAMALU. Strategy for the Development of the Small-Scale Smoked Roa Fish (Hemiramphus sp.) Industry in Boalemo District, Gorontalo Province. (Supervised by Sitti Nur Faridah and Muh. Hatta Jamil.) About 95% of the demand for smoked Roa fish in Gorontalo Province comes from outside the province; the remaining 5% is supplied by the Roa fish smoking industry in Boalemo Regency. These conditions are caused by the lack of production capacity, production facilities, and capital owned by small businesses. This study aims to analyze the current conditions of the small-scale Roa smoking industry, conduct a financial feasibility analysis, and formulate a development strategy for the industry. The research method combined qualitative and quantitative research, with data collected through interviews, documentation, and SWOT analysis. The feasibility of the Roa smoking industry was determined through NPV, IRR, BCR, PP, and BEP values. The SWOT analysis found that the strength-opportunity strategy had the highest score. Policies to support this development strategy are to create brands and labels, improve cooperative relationships with existing partners and networks, and take advantage of the abundant availability of raw materials to increase production capacity. In addition, it was found that the lack of processing facilities can be overcome, and product diversification developed, through assistance from the government or other agencies; diversification of processed products derived from smoked Roa fish can become an important strategy. Other important findings were that demand for the product is high and the industry cannot keep up with it, that the Roa smoking industry is worth investing in, and that the right strategy to develop the industry should be based on the strength-opportunity strategy.
{"title":"SCALE","authors":"Takatoshi Yoshida, Xiaoyan Shen, Koichi Yoshino, Ken Nakagaki, H. Ishii","doi":"10.1145/3332165.3347935","DOIUrl":"https://doi.org/10.1145/3332165.3347935","url":null,"abstract":"ANDRIAN DJAMALU. Strategy for the Development of Small scale Smoking Roa Fish (Hemiramphus sp.) industry In Boalemo District, Gorontalo Province. (Supervised by Sitti Nur Faridah and Muh. Hatta Jamil). About 95% of the demand for smoked Roa fish in the province of Gorontalo comes from outside the province and the remaining 5% provided by the roa fish smoking industry in Boalemo Regency. These conditions are affected by the lacks of production capacity, production facilities, and capital owned by small businesses. This study aims to analyze the current conditions of the small scale roa smoking industry, conduct financial feasibility analysis, and formulate development strategy for the small scale roa fish smoking industry. The research method used was qualitative and quantitative research with data collection techniques in the form of interviews, documentation, and SWOT analysis. Through the NPV, IRR, BCR, PP, and BEP values, the fasibility of the roa smoking industry was determined. Based on the results obtained from SWOT analysis, it was found that the strength-opportunity strategy had the highest score. Policies to support this development strategy are to create brands and labels, improve cooperative relationships with existing partners and networks, take advantage of the abundant availability of raw materials to increase production capacity. In addition, it was also found that the lack of processing facility can be overcome, and develop diversification of through assistance from government or from other agencies. It was also found that diversisification of processed products derived from smoked Roa fish can become an important strategy. Other important findings from this study were the demands for the product were high and the industry could not keep up with the demands, the Roa smoking industry is investment-worthy, and the right strategy to develop this industry should be based on the Strength-Opportunity strategy.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"154 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121809531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Redirected Jumping: Perceptual Detection Rates for Curvature Gains
Sungchul Jung, C. Borst, S. Hoermann, R. Lindeman
DOI: 10.1145/3332165.3347868
Redirected walking (RDW) techniques provide a way to explore a virtual space that is larger than the available physical space by imperceptibly manipulating the virtual world view or motions. These manipulations may introduce conflicts between real and virtual cues (e.g., visual-vestibular conflicts), which can be disturbing when users detect them. The empirically established detection thresholds for rotation manipulation in RDW still require a large physical tracking space and are therefore impractical for general-purpose Virtual Reality (VR) applications. We investigate Redirected Jumping (RDJ) as a new locomotion metaphor for redirection, both to partially address this limitation and because jumping is a common interaction in environments such as games. We measured detection rates for different curvature gains during RDJ. The probability of users detecting RDJ appears substantially lower than for RDW, meaning designers can apply stronger manipulations with RDJ than with RDW. We postulate that the substantial vertical (up/down) movement during jumping introduces more vestibular noise than normal walking, thereby supporting greater rotational manipulations. Our study suggests that combining locomotion metaphors (e.g., walking and jumping) could further reduce the physical space required for locomotion in VR. We also summarize differences in users' jumping approaches and report motion sickness measures.
{"title":"Redirected Jumping: Perceptual Detection Rates for Curvature Gains","authors":"Sungchul Jung, C. Borst, S. Hoermann, R. Lindeman","doi":"10.1145/3332165.3347868","DOIUrl":"https://doi.org/10.1145/3332165.3347868","url":null,"abstract":"Redirected walking (RDW) techniques provide a way to explore a virtual space that is larger than the available physical space by imperceptibly manipulating the virtual world view or motions. These manipulations may introduce conflicts between real and virtual cues (e.g., visual-vestibular conflicts), which can be disturbing when detectable by users. The empirically established detection thresholds of rotation manipulation for RDW still require a large physical tracking space and are therefore impractical for general-purpose Virtual Reality (VR) applications. We investigate Redirected Jumping (RDJ) as a new locomotion metaphor for redirection to partially address this limitation, and because jumping is a common interaction for environments like games. We investigated the detection rates for different curvature gains during RDJ. The probability of users detecting RDJ appears substantially lower than that of RDW, meaning designers can get away with greater manipulations with RDJ than with RDW. We postulate that the substantial vertical (up/down) movement present when jumping introduces increased vestibular noise compared to normal walking, thereby supporting greater rotational manipulations. Our study suggests that the potential combination of metaphors (e.g., walking and jumping) could further reduce the required physical space for locomotion in VR. We also summarize some differences in user jumping approaches and provide motion sickness measures in our study.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122085724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}