Rapid Prototyping of Pneumatically Actuated Inflatable Structures
Radhika Ghosal, Bhavika Rana, Ishan Kapur, Aman Parnami
https://doi.org/10.1145/3332167.3357121
Fabricating and actuating inflatables for shape-changing interfaces and soft robotics is challenging and time-consuming, requiring knowledge in diverse domains such as pneumatics, manufacturing processes for elastomers, and embedded systems. In this poster, we propose a scheme for rapidly prototyping and pneumatically actuating piecewise multi-chambered inflatables, using balloons as our building blocks. We provide a construction kit containing pneumatic control boards, pneumatic components, and balloons for constructing simple actuated balloon models. We also provide various actuation and locomotion primitives to help the user assemble their desired actuator, along with an Android app and software API for controlling it via Bluetooth. Finally, we demonstrate the construction and actuation of these inflatable structures through three sample applications.
Performance-based Expressive Character Animation
Deepali Aneja
https://doi.org/10.1145/3332167.3356880
For decades, animation has been a popular storytelling technique. Traditional tools for creating animations are labor-intensive, requiring animators to painstakingly draw frames and motion curves by hand. An alternative workflow equips animators with direct real-time control over digital characters via performance, which offers a more immediate and efficient way to create animation. Yet even with existing expression transfer and lip sync methods, producing convincing facial animation in real time remains challenging. In this position paper, I describe my past and proposed future research on interactive systems for perceptually valid expression retargeting from humans to stylized characters, real-time lip sync for 2D animation, and building an expressive, style-aligned embodied conversational agent.
What Did I Miss?
Qian Zhu, Shuai Ma
https://doi.org/10.1145/3332167.3357113
In Massive Open Online Courses (MOOCs), learners face many distractions that cause divided attention (DA). However, it is not easy for learners to realize that they are distracted, or to find out which part of the course they have missed. In this paper, we present Reminder, a system that detects divided attention and reminds learners what they just missed, on both PC and mobile devices, using a camera that captures their state. To estimate a learner's attention level, we build a regression model that predicts an attention score from an integrated feature vector, and we design an interactive updating method that adapts the model to a specific user. We also propose a visualization method that helps learners review missed content easily. A user study shows that Reminder detects learners' divided attention and effectively helps them review missed course content.
Cross-Ratio Based Gaze Estimation for Multiple Displays using a Polarization Camera
M. Sasaki, Takashi Nagamatsu, K. Takemura
https://doi.org/10.1145/3332167.3357095
Eye tracking is typically used to achieve intuitive user interfaces, but conventional approaches fall short in multiple-display environments, which have recently become common: the point-of-gaze must be estimated across multiple screens. We therefore propose cross-ratio-based gaze estimation for multiple screens using a polarization camera. The point-of-gaze can be estimated on each monitor by identifying, from the polarization angle, which screen is reflected on the corneal surface. Near-infrared light-emitting diodes (NIR-LEDs) attached to the display are not required, so standard, unmodified displays can be used, making the method broadly applicable.
Gel-based Haptic Mediator for High-Definition Tactile Communication
Patrick Coe, G. Evreinov, R. Raisamo
https://doi.org/10.1145/3332167.3357097
In an effort to display localized, high-definition tactile information to the skin more efficiently and accurately, we studied the vibrotactile constructive wave interference properties of silicone gel. By placing two actuators at a given distance from each other and controlling the delay between their activations, we can achieve a point of constructive wave interference. The timing of the interference, along with any losses due to attenuation, depends on the material the vibration travels through. The goal is to find a compliant material that can serve as a reference for human tissue when designing feedback for assistive robots.
Towards Instantaneous Recovery from Autonomous System Failures via Predictive Crowdsourcing
John Joon Young Chung, Fuhu Xiao, Nikola Banovic, Walter S. Lasecki
https://doi.org/10.1145/3332167.3357100
Autonomous systems (e.g., long-distance driverless trucks) aim to reduce the need for people to complete tedious tasks. In many domains, automation is challenging because a system may fail to recognize or comprehend all relevant aspects of its current state. When an unknown or uncertain state is encountered in a mission-critical setting, recovery often requires human intervention or hand-off. However, human intervention incurs decision (and, if remote, communication) delays that prevent recovery in low-latency settings. Instantaneous crowdsourcing approaches that leverage predictive techniques reduce this latency by preparing human responses for possible near-future states before they occur. Unfortunately, the number of possible future states can be vast, and considering all of them is intractable in all but the simplest settings. Instead, to reduce the number of states that must later be explored, we propose an approach that uses the crowd to first predict the most relevant or likely future states. We examine the latency and accuracy of crowd workers on a simple future-state prediction task and find that more than half of the workers provided accurate answers within one second. Our results show that crowd predictions can single out critical future states in tasks where decisions are required in less than three seconds.
Improving Viewer Engagement and Communication Efficiency within Non-Entertainment Live Streaming
Zhicong Lu
https://doi.org/10.1145/3332167.3356879
Live streaming has recently gained worldwide popularity, thanks to affordable digital video devices, high-speed Internet access, and social media. While video games and other entertainment content attract a broad live-streaming audience, live streaming has also become an important channel for sharing non-entertainment content, such as civic content, knowledge sharing, and even the promotion of traditional cultural practices. However, little research has explored the practices and challenges of the vibrant communities of streamers who share knowledge or showcase cultural practices through live streams, and few tools have been designed to support their needs to engage viewers and communicate with them more efficiently. The goal of my research is to better understand the practices of these streamers and their communities, and to design tools that better support knowledge sharing and cultural heritage preservation through live streaming.
Rapid Prototyping of Paper Electronics Using a Metal Leaf and Laser Printer
N. Segawa, Kunihiro Kato, Hiroyuki Manabe
https://doi.org/10.1145/3332167.3356885
We introduce a novel prototyping method that uses metal leaf and a laser printer. The proposed metal leaf circuits can be fabricated on plain paper from patterns printed by a laser printer: the metal leaf adheres to the paper via the printer's toner. When the metal leaf is placed on the printed circuit pattern and pressed with an iron, it bonds to the pattern; removing the excess metal leaf completes the circuit on the paper. In addition, by exploiting the temperature-dependent behavior of the toner, circuits can easily be cut and repaired. We fabricated metal leaf circuits on cloth and on a masu sake cup, and evaluated them.
A Demonstration on Dynamic Drawing Guidance via Electromagnetic Haptic Feedback
T. Langerak, J. Zárate, Velko Vechev, Daniele Panozzo, Otmar Hilliges
https://doi.org/10.1145/3332167.3356889
We demonstrate a system, presented in Langerak et al. [2019], that delivers dynamic guidance for drawing, sketching, and handwriting tasks via an electromagnet moving underneath a high-refresh-rate, pressure-sensitive tablet. The system lets users move the pen at their own pace and in their own style, and does not take away control. A closed-loop, time-free approach allows for error-correcting behavior: rather than being pushed or pulled toward a continuously advancing setpoint, the user is smoothly and naturally drawn back to the desired trajectory. Optimizing the setpoint with respect to the user is unique to our approach.
Occlusion-aware Hand Posture Based Interaction on Tabletop Projector
Eisuke Fujinawa, K. Goto, Atsushi Irie, Songtao Wu, Kuanhong Xu
https://doi.org/10.1145/3332167.3356890
Conventional camera-based hand interaction techniques suffer from self-occlusion among the fingers, which lowers the accuracy of fingertip detection and leads to uncomfortable UI control. Based on our observations, self-occlusion depends on hand posture. We therefore design an interaction framework in which the interaction is decided in response to the recognized hand posture. Using a tabletop projection system consisting of a projector and a depth sensor, we implement the framework by integrating five touch and in-air interactions that balance stability and usability.