With the increasing popularity of cameras, more and more people are interested in learning photography. People are willing to invest in expensive cameras as a medium for their artistic expression, but few have access to in-person classes. Inspired by the critique sessions common in in-person art practice classes, we propose design principles for creative learning. My dissertation research focuses on designing new interfaces and interactions that provide contextual in-camera feedback to help users learn the visual elements of photography. We interactively visualize the results of image processing algorithms as additional information that lets the user make more informed and intentional decisions during capture. In this paper, we describe our design principles and apply them in the design of two guided photography interfaces: one for exploring the lighting options for a portrait, and one for refining the contents and composition of a photo.
{"title":"Artistic Vision: Providing Contextual Guidance for Capture-Time Decisions","authors":"J. E","doi":"10.1145/3266037.3266128","DOIUrl":"https://doi.org/10.1145/3266037.3266128","url":null,"abstract":"With the increased popularity of cameras, more and more people are interested in learning photography. People are willing to invest in expensive cameras as a medium for their artistic expression, but few have access to in-person classes. Inspired by critique sessions common in in-person art practice classes, we propose design principles for creative learning. My dissertation research focuses on designing new interfaces and interactions that provide contextual in-camera feedback to aid users in learning visual elements of photography. We interactively visualize results of image processing algorithms as additional information for the user to make more informed and intentional decisions during capture. In this paper, we describe our design principles, and apply these principles in the design of two guided photography interfaces: one to explore lighting options for a portrait, and one to refine contents and composition of a photo.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120972675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attila Kett, Giuseppe Abrami, Alexander Mehler, C. Spiekermann
We present resources2city Explorer (R2CE), a tool for representing file systems as interactive, walkable virtual cities. R2CE visualizes file systems based on concepts of spatial, three-dimensional information processing. For this purpose, it considerably extends the range of functions of conventional file browsers. Visual elements in a city generated by R2CE represent objects of the underlying file system and the relations between them. The paper describes the functional spectrum of R2CE and illustrates it by visualizing a sample of 940 files.
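The abstract does not spell out R2CE's exact mapping from file-system objects to city elements; the following is a minimal sketch of the general "software city" idea it builds on, in which directories become districts and files become buildings. The log-scaled size-to-height rule is our illustrative assumption, not R2CE's actual rule.

```python
import math
import os

def build_city(root):
    """Walk a file-system subtree and emit a city layout: each
    directory becomes a district, each file a building whose height
    grows with file size (log-scaled so huge files don't dwarf
    everything else)."""
    city = []
    for dirpath, _dirnames, filenames in os.walk(root):
        district = os.path.relpath(dirpath, root)
        buildings = []
        for name in filenames:
            try:
                size = os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                continue  # skip unreadable entries
            buildings.append({"name": name,
                              "height": 1.0 + math.log2(1 + size)})
        city.append({"district": district, "buildings": buildings})
    return city
```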
{"title":"resources2city Explorer: A System for Generating Interactive Walkable Virtual Cities out of File Systems","authors":"Attila Kett, Giuseppe Abrami, Alexander Mehler, C. Spiekermann","doi":"10.1145/3266037.3266122","DOIUrl":"https://doi.org/10.1145/3266037.3266122","url":null,"abstract":"We present resources2city Explorer (R2CE), a tool for representing file systems as interactive, walkable virtual cities. R2CE visualizes file systems based on concepts of spatial, 3D information processing. For this purpose, it extends the range of functions of conventional file browsers considerably. Visual elements in a city generated by R2CE represent (relations of) objects of the underlying file system. The paper describes the functional spectrum of R2CE and illustrates it by visualizing a sample of 940 files.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115199646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feras Al Taha, Pascal E. Fortin, Antoine Weill--Duflos, J. Cooperstock
Biased perceptions of others are known to negatively influence the outcomes of social and professional interactions in many regards. These biases can be informed by a multitude of non-verbal cues, such as voice pitch and volume. This project explores how haptic effects generated from speech could attenuate listeners' voice-related biases formed from a speaker's voice pitch. Promising preliminary results collected during a decision-making task suggest that the speech-to-haptic mapping and vibration-delivery mechanism employed does attenuate voice-related biases. Accordingly, we anticipate that such a system could be introduced in the workplace to equalize people's opportunities to contribute and to create a more inclusive environment by reversing voice-related biases.
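The abstract leaves the speech-to-haptic mapping unspecified; below is a minimal sketch of one plausible pipeline, assuming an autocorrelation pitch estimate followed by a monotone pitch-to-intensity map. The direction of that map (stronger vibrotactile reinforcement for lower-pitched voices) is purely an illustrative assumption.

```python
import numpy as np

def estimate_pitch(frame, sr=16000, fmin=60.0, fmax=400.0):
    """Rough fundamental-frequency estimate for one speech frame
    (e.g. 1024 samples) via autocorrelation peak picking."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # lag range for 60-400 Hz
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

def pitch_to_intensity(f0, fmin=60.0, fmax=400.0):
    """Map pitch to a 0..1 vibrotactile intensity (hypothetical
    mapping: lower-pitched voices get stronger reinforcement)."""
    t = (np.clip(f0, fmin, fmax) - fmin) / (fmax - fmin)
    return 1.0 - t
```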
{"title":"Reversing Voice-Related Biases Through Haptic Reinforcement","authors":"Feras Al Taha, Pascal E. Fortin, Antoine Weill--Duflos, J. Cooperstock","doi":"10.1145/3266037.3266101","DOIUrl":"https://doi.org/10.1145/3266037.3266101","url":null,"abstract":"Biased perceptions of others are known to negatively influence the outcomes of social and professional interactions in many regards. Theses biases can be informed by a multitude of non-verbal cues such as voice pitch and voice volume. This project explores how haptic effects, generated from speech, could attenuate listeners' perceived voice-related biases formed from a speaker's voice pitch. Promising preliminary results collected during a decision-making task suggest that the speech to haptic mapping and vibration delivery mechanism employed does attenuate voice-related biases. Accordingly, it is anticipated that such a system could be introduced in the workplace to equalize people's contribution opportunities and to create a more inclusive environment by reversing voice-related biases.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"120-121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131709390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tomoya Sasaki, M. Y. Saraiji, K. Minamizawa, M. Inami
We introduce MetaArms, wearable anthropomorphic robotic arms and hands with six degrees of freedom, operated by the user's legs and feet. Our overall research goal is to re-imagine what our bodies can do with the aid of wearable robotics, using a body-remapping approach. To this end, we present an initial exploratory case study. MetaArms' two robotic arms are controlled by the user's foot motion, and the robotic hands grip objects when the user bends their toes. Haptic feedback correlated with the objects touched by the robotic hands is presented on the user's feet, creating a closed-loop system. Using this system, users can experience interaction with an expanded number of arms, in which their legs are mapped onto the artificial limbs. MetaArms provides initial indications of a sense of limb alteration.
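The abstract describes the remapping only at a high level; as a sketch of what such a foot-to-arm mapping could look like, the following maps a tracked foot position onto an arm end-effector target and a toe-bend reading onto a gripper command. The gains, offsets, and threshold are hypothetical, not MetaArms' actual control parameters.

```python
import numpy as np

def foot_to_arm(foot_pose, toe_bend):
    """Map a tracked foot pose onto robotic-arm commands.

    foot_pose: (x, y, z) position of the foot tracker in metres.
    toe_bend:  0..1 normalized toe flexion from a bend sensor.
    Returns an end-effector target and a gripper command.
    """
    # Hypothetical workspace scaling: small foot motions sweep a
    # larger arm workspace, keeping the mapping comfortable.
    GAIN = np.array([2.0, 2.0, 1.5])
    OFFSET = np.array([0.0, 0.3, 0.8])    # arm workspace origin (m)
    target = OFFSET + GAIN * np.asarray(foot_pose)
    grip_closed = toe_bend > 0.5          # bend toes past half-way to grip
    return target, grip_closed
```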
{"title":"MetaArms: Body Remapping Using Feet-Controlled Artificial Arms","authors":"Tomoya Sasaki, M. Y. Saraiji, K. Minamizawa, M. Inami","doi":"10.1145/3266037.3271628","DOIUrl":"https://doi.org/10.1145/3266037.3271628","url":null,"abstract":"We introduce MetaArms, wearable anthropomorphic robotic arms and hands with six degrees of freedom operated by the user's legs and feet. Our overall research goal is to re-imagine what our bodies can do with the aid of wearable robotics using a body-remapping approach. To this end, we present an initial exploratory case study. MetaArms' two robotic arms are controlled by the user's feet motion, and the robotic hands can grip objects according to the user's toes bending. Haptic feedback is also presented on the user's feet that correlate with the touched objects on the robotic hands, creating a closed-loop system. Using this system, users can experience an expanded number of arms interaction in which there legs are mapped into the artificial limbs. MetaArms provided initial indications for the sense of limbs alteration.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"416 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114120949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anke van Oosterhout, Majken Kirkegaard Rasmussen, Eve E. Hoggan, M. B. Alonso
We present six rotary knobs, each with a distinct shape, that provide haptic force feedback on rotation. The knob shapes were evaluated in relation to twelve haptic feedback stimuli. The stimuli were designed as combinations of the most relevant perceptual parameters of force feedback: acceleration, friction, detent amplitude, and detent spacing. The results indicate that there is a relationship between the shape of a knob and its haptic feedback: the perceived functionality of a knob can be dynamically altered by changing its shape and haptic feedback. This work serves as a basis for the design of dynamic interface controls that adapt their shape and haptic feel to the content being controlled. In our demonstration, we show the six distinct knob shapes with the different haptic feedback stimuli. Attendees can experience the interaction with the different knob shapes in relation to the stimuli, and can design stimuli with a graphical editor.
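The abstract names the stimulus parameters but not the rendering model; a common way to render detents, friction, and acceleration on a motorized knob is a torque field like the sketch below. This specific model and its constants are our assumption, not the authors' implementation.

```python
import math

def knob_torque(theta, omega, alpha,
                detent_amp=0.02,                  # detent strength (N*m)
                detent_spacing=math.radians(20),  # angle between detents
                friction=0.005,                   # viscous coefficient (N*m*s/rad)
                inertia=0.001):                   # virtual inertia (kg*m^2)
    """Commanded motor torque for one control-loop tick.

    theta: knob angle (rad), omega: angular velocity (rad/s),
    alpha: angular acceleration (rad/s^2). The sinusoid creates
    stable detents at every multiple of detent_spacing.
    """
    detent = -detent_amp * math.sin(2 * math.pi * theta / detent_spacing)
    drag = -friction * omega   # resists motion -> 'friction' stimulus
    mass = -inertia * alpha    # resists speed change -> 'acceleration' stimulus
    return detent + drag + mass
```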
{"title":"Knobology 2.0: Giving Shape to the Haptic Force Feedback of Interactive Knobs","authors":"Anke van Oosterhout, Majken Kirkegaard Rasmussen, Eve E. Hoggan, M. B. Alonso","doi":"10.1145/3266037.3271649","DOIUrl":"https://doi.org/10.1145/3266037.3271649","url":null,"abstract":"We present six rotary knobs, each with a distinct shape, that provide haptic force feedback on rotation. The knob shapes were evaluated in relation to twelve haptic feedback stimuli. The stimuli were designed as a combination of the most relevant perceptual parameters of force feedback; acceleration, friction, detent amplitude and spacing. The results indicate that there is a relationship between the shape of a knob and its haptic feedback. The perceived functionality can be dynamically altered by changing its shape and haptic feedback. This work serves as basis for the design of dynamic interface controls that can adapt their shape and haptic feel to the content that is controlled. In our demonstration, we show the six distinct knobs shapes with the different haptic feedback stimuli. Attendees can experience the interaction with the different knob shapes in relation the stimuli and design stimuli with a graphical editor.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116975851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Locomotion, the most basic interaction in Virtual Environments (VEs), enables users to move around the virtual world. Locomotion in Virtual Reality (VR) is a problem that has not been solved completely, since existing techniques each have a specific set of requirements and limitations. In addition, uncertainty about the impact that virtual cues have on users' perception complicates the development of better locomotion interfaces. A broadly applicable locomotion technique that is easy to use and addresses the issues of presence, cybersickness, and fatigue has yet to be developed. Though optical flow and vestibular cues are dominant in navigation, other cues, such as audio, arm feedback, and wind, also play a role. The proposed research aims to evaluate and improve upon a set of locomotion techniques for different modes of locomotion in virtual scenarios, as well as the transitions between them. The outcome measures for evaluating the different scenarios are usefulness for spatial orientation, presence, fatigue, cybersickness, and user preference. The envisioned contribution of my thesis is research towards the design of a locomotion technique that is easy to use and addresses the shortcomings of current implementations.
{"title":"Comfortable and Efficient Travel Techniques in VR","authors":"Bhuvaneswari Sarupuri","doi":"10.1145/3266037.3266126","DOIUrl":"https://doi.org/10.1145/3266037.3266126","url":null,"abstract":"Locomotion,the most basic interaction in Virtual Environments (VE), enables users to move around the virtual world. Locomotion in Virtual Reality (VR) is a problem which has not been solved completely since existing techniques have a specific set of requirements and limitations. In addition, the uncertainty about the impact that virtual cues have on users perception complicates the development of better locomotion interfaces. A broadly applicable locomotion technique that is easy to use and addresses the issues of presence, cybersickness and fatigue has yet to be developed. Though optical flow and vestibular cues are dominant in navigation, other cues such as auditory, arm feedback, wind, etc. play a role. The proposed research aims to evaluate and improve upon a set of locomotion techniques for different modes of locomotion in virtual scenarios, as well as the transitions between them. The outcome measures of the evaluations of the different scenarios are usefulness for spatial orientation, presence, fatigue, cybersickness and user preference. The envisioned contribution of my thesis is research towards the design of a locomotion technique that is easy to use and addresses the shortcomings of current implementations.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124934464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose to equip smartphone-based HMDs (SbHMDs) with an additional touch pad. SbHMDs are a low-cost approach to letting users experience virtual reality (VR). Current SbHMDs, however, provide poor input functionality, and external devices are sometimes necessary to enhance the VR experience. Our proposal uses frustrated total internal reflection (FTIR) to realize a touch pad on the external surface of the HMD case; no special devices are needed. As simple FTIR approaches do not suit SbHMDs due to the spatial relation between the camera and the light source, we design an arrangement of acrylic plates and a mirror suited to the smartphone's built-in camera and torch. This extends the input vocabulary of SbHMDs to include touch location, gestures, and pressure.
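The abstract does not detail the touch-detection pipeline; the sketch below shows the standard FTIR approach of thresholding the camera frame and extracting bright blobs, where a blob centroid gives the touch location and blob area serves as a rough pressure proxy (a fingertip flattens as it presses harder). The threshold and area values are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_ftir_touches(frame_gray, thresh=200, min_area=30):
    """Locate FTIR touch blobs in a grayscale camera frame.

    Returns a list of (cx, cy, area) tuples. Touches appear as
    bright spots where the finger frustrates total internal
    reflection and scatters light toward the camera."""
    _, binary = cv2.threshold(frame_gray, thresh, 255, cv2.THRESH_BINARY)
    # Remove single-pixel noise before extracting contours.
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN,
                              np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    touches = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue
        m = cv2.moments(c)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        touches.append((cx, cy, area))   # area ~ pressure proxy
    return touches
```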
{"title":"FTIR-based Touch Pad for Smartphone-based HMD Enhancement","authors":"Takuya Kitade, Wataru Yamada, H. Manabe","doi":"10.1145/3266037.3271641","DOIUrl":"https://doi.org/10.1145/3266037.3271641","url":null,"abstract":"We propose to equip smartphone-based HMDs (SbHMDs) with an additional touch pad. SbHMDs are a low cost approach to allowing users to experience virtual reality (VR). Current SbHMDs, however, provide poor input functionality and sometimes external devices are necessary to enhance the VR experience. Our proposal uses frustrated total internal reflection (FTIR) to realize a touch pad on the external surfaces of the HMD case; no special devices are needed. As simple FTIR approaches do not suit SbHMDs due to the spatial relation between camera and light, we design an arrangement of acrylic plates and mirror suitable for smartphone's built-in camera and torch-light. It extends the input vocabulary SbHMDs to include touch location, gestures, and also pressure.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"9 49","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120927850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual reality (VR) using head-mounted displays (HMDs) has rapidly become popular. Many HMD products and various VR applications, such as games, training tools, and communication services, have been released in recent years. However, there is a well-known problem: the HMD covers the user's face, preventing facial expressions from being captured. This strongly restricts VR applications. For example, users wearing HMDs normally cannot exchange their face images. This degrades communication quality in virtual spaces, because facial expressions are an important element of human communication.
{"title":"Transparent Mask: Face-Capturing Head-Mounted Display with IR Pass Filters","authors":"Mariko Chiba, Wataru Yamada, H. Manabe","doi":"10.1145/3266037.3271632","DOIUrl":"https://doi.org/10.1145/3266037.3271632","url":null,"abstract":"Virtual reality (VR) using a head-mounted display (HMD) have been rapidly becoming popular. Lots of HMD products and various VR applications such as games, training tools and communication services have been released in recent years. However, there is a well-known problem that the user's face is covered by the HMD preventing the facial expression from being captured. This strongly restricts VR applications. For example, users wearing HMDs normally cannot exchange their face images. This degrades communication quality in virtual spaces because facial expressions are an important element of human communication.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121356755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Naoshi Ooba, K. Aoyama, Hiromi Nakamura, Homei Miyashita
Herein, we propose "unlimited electric gum," an electric taste device that enables users to perceive taste for as long as they chew the gum. We developed a novel in-mouth electric taste-imparting apparatus that uses a piezoelectric element, so that chewing itself generates the stimulation via the piezoelectric effect. This enables a device design that requires neither cables around the user's lips nor batteries inside their mouth. In this paper, we introduce the device and report our experimental and exhibition results.
{"title":"Unlimited Electric Gum: A Piezo-based Electric Taste Apparatus Activated by Chewing","authors":"Naoshi Ooba, K. Aoyama, Hiromi Nakamura, Homei Miyashita","doi":"10.1145/3266037.3271635","DOIUrl":"https://doi.org/10.1145/3266037.3271635","url":null,"abstract":"Herein, we propose \"unlimited electric gum,\" an electric taste device that will enable users to perceive taste for as long the user is chewing the gum. We developed an in-mouth type novel electric taste-imparting apparatus using a piezoelectric element so that the piezoelectric effect is stimulated by chewing. This enabled the design of a device that does not require cables around a user's lips or batteries in their mouth. In this paper, we introduce this device and report our experimental and exhibition results.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116219675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smartphone user authentication remains an open challenge because it must balance security and usability. Active authentication is one way to strike this balance. In this paper, we aim to improve the accuracy of active authentication by adopting online learning with touch pressure. Smartphones equipped with pressure sensors have recently become widely available, which allows touch pressure to be used as an authentication feature. Our experiments, which adopt the online AROW algorithm with touch pressure, show that the equal error rate (EER), the point at which the miss rate and the false-acceptance rate are equal, is reduced by up to a factor of five by adding the touch-pressure feature. Moreover, we confirmed that training with data from both the sitting and prone postures achieves the best results when testing across a variety of postures, including sitting, standing, and prone, yielding an EER as low as 0.14%.
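The abstract names the online AROW algorithm but not its update rule; for reference, here is a minimal sketch of the standard AROW update (Crammer et al., 2009) applied to a hypothetical per-touch feature vector in which the pressure reading is the appended feature. The five features shown are our illustrative assumption, not the paper's exact feature set.

```python
import numpy as np

class AROW:
    """Adaptive Regularization of Weights, here framed as
    genuine user (+1) vs. impostor (-1) classification."""

    def __init__(self, dim, r=1.0):
        self.w = np.zeros(dim)    # mean weight vector
        self.sigma = np.eye(dim)  # confidence (covariance) matrix
        self.r = r                # regularization parameter

    def update(self, x, y):
        """One online step; x is a feature vector, y is +1 or -1."""
        margin = self.w @ x
        if y * margin >= 1.0:     # margin already satisfied: no update
            return
        sx = self.sigma @ x
        beta = 1.0 / (x @ sx + self.r)
        alpha = (1.0 - y * margin) * beta
        self.w += alpha * y * sx
        self.sigma -= beta * np.outer(sx, sx)

    def score(self, x):
        return self.w @ x

# Hypothetical per-touch feature vector: [x, y, duration, area, pressure].
# Dropping the last component would emulate the no-pressure baseline.
clf = AROW(dim=5)
clf.update(np.array([0.42, 0.77, 0.12, 0.30, 0.65]), +1)
```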
{"title":"Active Authentication on Smartphone using Touch Pressure","authors":"Masashi Kudo, H. Yamana","doi":"10.1145/3266037.3266113","DOIUrl":"https://doi.org/10.1145/3266037.3266113","url":null,"abstract":"Smartphone user authentication is still an open challenge because the balance between both security and usability is indispensable. To balance between them, active authentication is one way to overcome the problem. In this paper, we tackle to improve the accuracy of active authentication by adopting online learning with touch pressure. In recent years, it becomes easy to use the smartphones equipped with pressure sensor so that we have confirmed the effectiveness of adopting the touch pressure as one of the features to authenticate. Our experiments adopting online AROW algorithm with touch pressure show that equal error rate (EER), where the miss rate and false rate are equal, is reduced up to one-fifth by adding touch pressure feature. Moreover, we have confirmed that training with the data from both sitting posture and prone posture archives the best when testing variety of postures including sitting, standing and prone, which achieves EER up to 0.14%.","PeriodicalId":208006,"journal":{"name":"Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121654429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}