Nobel laureate Herb Simon described design as changing existing situations into preferred situations. But Simon did not specify the medium by which we must design. As a technologist tackling challenges, I often over-rely on technology as my medium and limit my potential for impact. We can design technology, but we can also design services, processes, organizations, and much more. By understanding more broadly the media through which we can design, we can more systematically build the future in which we want to live.
How We Make Impact. E. Gerber. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST 2019), October 17, 2019. doi:10.1145/3332165.3348235
Augmented reality (AR) has the potential to expand our capability for interacting with and comprehending our surrounding environment. However, current AR devices treat electronic appliances no differently from common non-interactive objects, which substantially limits the functionality of AR. We present InfoLED, a positioning and communication system based on indicator lights that enables appliances to transmit their location, device IDs, and status information to an AR client without changing their visual design. By leveraging human insensitivity to high-frequency brightness flickering, InfoLED transmits all of this information without disturbing the light's original function as an indicator. We envision InfoLED being used in three categories of application: malfunctioning-device diagnosis, appliance control, and multi-appliance configuration. We conducted three user studies measuring the performance of the InfoLED system, the human readability of the patterns and colors displayed on the InfoLED, and users' overall preference for InfoLED. The results showed that InfoLED works properly from distances of up to 7 meters in indoor conditions and does not interfere with participants' ability to comprehend the high-level patterns and colors of the indicator light. Overall, participants preferred InfoLED to an ArUco 2D-barcode-based baseline system and reported lower cognitive load when using our system.
InfoLED: Augmenting LED Indicator Lights for Device Positioning and Communication. Jackie Yang, J. Landay. UIST 2019. doi:10.1145/3332165.3347954
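The core idea of hiding data in an indicator light can be sketched in a few lines: modulate the LED's duty cycle at a rate above the human flicker-fusion threshold, and use an encoding whose long-run average brightness is constant so the light still looks steady. This is a hypothetical illustration of the general technique, not the paper's actual protocol; the frame layout and function names are assumptions.

```python
# Hypothetical sketch of flicker-based LED data encoding in the spirit of
# InfoLED: bits are hidden in rapid on/off symbols that a camera can decode
# but a human perceives as constant brightness.

def manchester_encode(bits):
    """Manchester-encode bits so average brightness stays constant
    regardless of payload (each bit becomes a 1,0 or 0,1 symbol pair)."""
    out = []
    for b in bits:
        out.extend([1, 0] if b else [0, 1])
    return out

def frame(device_id, status, id_bits=8, status_bits=4):
    """Build one frame: a sync preamble followed by the Manchester-encoded
    device ID and status bits (LSB first). Layout is illustrative."""
    preamble = [1, 1, 1, 0]  # pattern the camera-side decoder locks onto
    payload = [(device_id >> i) & 1 for i in range(id_bits)]
    payload += [(status >> i) & 1 for i in range(status_bits)]
    return preamble + manchester_encode(payload)
```

Because every encoded bit contributes exactly one "on" symbol, the payload never changes the light's duty cycle, which is what keeps the flicker imperceptible.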
Programmers, researchers, system administrators, and data scientists often build complex workflows based on command-line applications. To give these power users the well-known benefits of GUIs, we created Bespoke, a system that synthesizes custom GUIs by observing user demonstrations of command-line apps. Bespoke unifies the two main forms of desktop human-computer interaction (command-line and GUI) via a hybrid approach that combines the flexibility and composability of the command line with the usability and discoverability of GUIs. To assess the versatility of Bespoke, we ran an open-ended study where participants used it to create their own GUIs in domains that personally motivated them. They made a diverse set of GUIs for use cases such as cloud computing management, machine learning prototyping, lecture video transcription, integrated circuit design, remote code deployment, and gaming server management. Participants reported that the benefit of these bespoke GUIs was that they exposed only the most relevant subset of options required for their specific needs. In contrast, vendor-made GUIs usually include far more panes, menus, and settings since they must accommodate a wider range of use cases.
Bespoke: Interactively Synthesizing Custom GUIs from Command-Line Applications By Demonstration. Priyan Vaithilingam, Philip J. Guo. UIST 2019. doi:10.1145/3332165.3347944
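The inference step at the heart of a demonstration-based approach like this can be sketched as follows: compare several demonstrated invocations of the same command and map each varying token position to an editable widget, leaving unvaried tokens as fixed labels. The widget names and classification rules below are illustrative assumptions, not Bespoke's actual API.

```python
# Hypothetical sketch of GUI synthesis by demonstration: positions that
# vary across demos become editable widgets; constant positions stay fixed.

def infer_widgets(demos):
    """Given several demonstrated invocations (token lists) of one command,
    classify each token position as a label, spinbox, or text field."""
    widgets = []
    for pos, tokens in enumerate(zip(*demos)):
        values = set(tokens)
        if len(values) == 1:
            widgets.append(("label", pos, tokens[0]))         # never varied
        elif all(v.lstrip("-").isdigit() for v in values):
            widgets.append(("spinbox", pos, sorted(values)))  # numeric arg
        else:
            widgets.append(("textfield", pos, sorted(values)))
    return widgets

# Two demonstrations of the same ImageMagick-style command:
demos = [
    ["convert", "in.png", "-resize", "50%", "out.png"],
    ["convert", "in.png", "-resize", "75%", "out.png"],
]
```

Here only the resize argument varied, so the synthesized GUI would expose a single editable field, which matches the paper's observation that bespoke GUIs surface just the relevant subset of options.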
Yuhang Zhao, Elizabeth Kupferstein, Brenda Veronica Castro, Steven K. Feiner, Shiri Azenkot
Navigating stairs is a dangerous mobility challenge for people with low vision, who have a visual impairment that falls short of blindness. Prior research contributed systems for stair navigation that provide audio or tactile feedback, but people with low vision have usable vision and don't typically use nonvisual aids. We conducted the first exploration of augmented reality (AR) visualizations to facilitate stair navigation for people with low vision. We designed visualizations for a projection-based AR platform and smartglasses, considering the different characteristics of these platforms. For projection-based AR, we designed visual highlights that are projected directly on the stairs. In contrast, for smartglasses that have a limited vertical field of view, we designed visualizations that indicate the user's position on the stairs, without directly augmenting the stairs themselves. We evaluated our visualizations on each platform with 12 people with low vision, finding that the visualizations for projection-based AR increased participants' walking speed. Our designs on both platforms largely increased participants' self-reported psychological security.
Designing AR Visualizations to Facilitate Stair Navigation for People with Low Vision. Yuhang Zhao, Elizabeth Kupferstein, Brenda Veronica Castro, Steven K. Feiner, Shiri Azenkot. UIST 2019. doi:10.1145/3332165.3347906
Session details: Session 3B: Accessibility. Chair: X. Chen. doi:10.1145/3368374
Session details: Session 7B: Haptics. Chair: Karon E. MacLean. doi:10.1145/3368382
S. Yoon, Siyuan Ma, Wee Sun Lee, Shantanu Thakurdesai, Di Sun, Flavio P. Ribeiro, J. Holbery
We present HapSense, a single-volume soft haptic I/O device with uninterrupted dual functionality: force sensing and vibrotactile actuation. To achieve both input and output, we employ a ferroelectric electroactive polymer as the core functional material in a multilayer structure. We introduce haptic I/O hardware that supports a tunable high-voltage driving waveform for vibrotactile actuation while sensing, in situ, the change in capacitance caused by contact force. Owing to the mechanically soft nature of the fabricated structure, HapSense can be embedded in various object surfaces, including but not limited to furniture, garments, and the human body. Through a series of experiments and evaluations, we characterized the physical properties of HapSense and validated the feasibility of soft haptic I/O with real users. We demonstrated a variety of interaction scenarios using HapSense.
HapSense. S. Yoon, Siyuan Ma, Wee Sun Lee, Shantanu Thakurdesai, Di Sun, Flavio P. Ribeiro, J. Holbery. UIST 2019. doi:10.1145/3332165.3347888
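The sensing half of such a device can be illustrated with a simple model: compressing the polymer stack brings the electrodes closer and raises capacitance, so a calibrated mapping from capacitance change to force suffices for basic input. The linear model and all constants below are made-up assumptions for illustration, not values from the paper.

```python
# Hypothetical sketch of capacitance-to-force estimation for a soft
# polymer sensor stack: compression increases capacitance roughly
# linearly near rest, so force is proportional to the capacitance delta.

def force_from_capacitance(c_pf, c_rest_pf=100.0, pf_per_newton=2.5):
    """Map a capacitance reading (pF) to an estimated contact force (N).
    c_rest_pf and pf_per_newton would come from per-device calibration."""
    return max(0.0, (c_pf - c_rest_pf) / pf_per_newton)
```

In a real device the same electrodes also carry the high-voltage actuation waveform, so the measurement circuit would additionally have to filter out the drive signal before applying a mapping like this.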
We propose integrating an array of skin stretch modules with a head-mounted display (HMD) to provide two-dimensional skin stretch feedback on the user's face. Skin stretch has been found effective for inducing the perception of force (e.g., weight or inertia) and for delivering directional haptic cues. However, its potential as an HMD output for virtual reality (VR) remains to be exploited. Our exploratory study first investigated the design of shear tactors. Based on our results, we implemented Masque, an HMD prototype that actuates six shear tactors positioned on the HMD's face interface. A comfort study ensured that the skin stretches generated by Masque are acceptable to all participants. Two follow-up perception studies examined the minimum changes in stretch distance and stretch angle that participants could detect. The results helped us design haptic profiles as well as our prototype applications. Finally, a user evaluation indicates that participants welcomed Masque and regarded skin stretch feedback as a worthwhile addition to HMD output.
Masque. Chi Wang, Da-Yuan Huang, Shuo-wen Hsu, Chu-En Hou, Yeu-Luen Chiu, Ruei-Che Chang, Jo-Yu Lo, Bing-Yu Chen. UIST 2019. doi:10.1145/3332165.3347898
T. Roumen, Jotaro Shigeyama, J. Rudolph, Felix Grzelka, Patrick Baudisch
Joints are crucial to laser cutting, as they allow making three-dimensional objects; mounts are crucial because they allow embedding technical components, such as motors. Unfortunately, mounts and joints tend to fail when trying to fabricate a model on a different laser cutter or from a different material. The reason for this lies in the way mounts and joints hold objects in place, which is by forcing them into slightly smaller openings. Such "press fit" mechanisms unfortunately are susceptible to the small changes in diameter that occur when switching to a machine that removes more or less material ("kerf"), as well as to the changes in stiffness that occur when switching to a different material. We present a software tool called springFit that resolves this problem by replacing the problematic press-fit-based mounts and joints with what we call cantilever-based mounts and joints. A cantilever spring is simply a long, thin piece of material that pushes against the object to be held. Unlike press fits, cantilever springs are robust against variations in kerf and material; they can even handle very high variations, simply by using longer springs. SpringFit converts models in the form of 2D cutting plans by replacing all contained mounts, notch joints, finger joints, and t-joints. In our technical evaluation, we used springFit to convert 14 models downloaded from the web.
SpringFit. T. Roumen, Jotaro Shigeyama, J. Rudolph, Felix Grzelka, Patrick Baudisch. UIST 2019. doi:10.1145/3332165.3347930
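The claim that longer springs absorb more variation follows directly from beam bending: for a cantilever of length L and thickness t, the largest tip deflection before the material yields is delta = 2 * sigma_y * L^2 / (3 * E * t), so tolerance grows with the square of length. A quick back-of-the-envelope check, using illustrative material values (roughly acrylic, not figures from the paper):

```python
# Why longer cantilever springs tolerate more kerf variation: maximum
# elastic tip deflection of a cantilever grows with length squared.
# Material constants below are illustrative, not from the springFit paper.

def max_deflection_mm(length_mm, thickness_mm, e_mpa=3000.0, yield_mpa=70.0):
    """Largest tip deflection (mm) a cantilever can take before yielding,
    from standard beam theory: delta = 2 * sigma_y * L^2 / (3 * E * t)."""
    return 2.0 * yield_mpa * length_mm ** 2 / (3.0 * e_mpa * thickness_mm)

# Doubling the spring length quadruples the tolerable deflection, which is
# why springFit can handle large kerf differences simply with longer springs.
```

A press fit, by contrast, has no compliant element at all: its usable tolerance is on the order of the kerf variation itself, typically a small fraction of a millimeter.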
Session details: Session 9A: Walking, Jumping, Roaming. Chair: Sean Follmer. doi:10.1145/3368385