inFORM: dynamic physical affordances and constraints through shape and object actuation
Sean Follmer, Daniel Leithinger, A. Olwal, Akimitsu Hogge, H. Ishii
DOI: https://doi.org/10.1145/2501988.2502032
Past research on shape displays has primarily focused on rendering content and user interface elements through shape output, with less emphasis on dynamically changing UIs. We propose utilizing shape displays in three different ways to mediate interaction: to facilitate by providing dynamic physical affordances through shape change, to restrict by guiding users with dynamic physical constraints, and to manipulate by actuating physical objects. We outline potential interaction techniques and introduce Dynamic Physical Affordances and Constraints with our inFORM system, built on top of a state-of-the-art shape display, which provides for variable stiffness rendering and real-time user input through direct touch and tangible interaction. A set of motivating examples demonstrates how dynamic affordances, constraints and object actuation can create novel interaction possibilities.
Controlling widgets with one power-up button
Daniel Spelmezan, Caroline Appert, O. Chapuis, Emmanuel Pietriga
DOI: https://doi.org/10.1145/2501988.2502025
The Power-up Button is a physical button that combines pressure and proximity sensing to enable gestural interaction with one thumb. Combined with a gesture recognizer that takes the hand's anatomy into account, the Power-up Button can recognize six different mid-air gestures performed on the side of a mobile device. This gives it, for instance, enough expressive power to provide full one-handed control of interface widgets displayed on screen. This technology can complement touch input, and can be particularly useful when interacting eyes-free. It also opens up a larger design space for widget organization on screen: the button enables a more compact layout of interface components than what touch input alone would allow. This can be useful when, e.g., filling the numerous fields of a long Web form, or for very small devices.
MenuOptimizer: interactive optimization of menu systems
G. Bailly, Antti Oulasvirta, Timo Kötzing, Sabrina Hoppe
DOI: https://doi.org/10.1145/2501988.2502024
Menu systems are challenging to design because design spaces are immense, and several human factors affect user behavior. This paper contributes to the design of menus with the goal of interactively assisting designers with an optimizer in the loop. To reach this goal, 1) we extend a predictive model of user performance to account for expectations as to item groupings; 2) we adapt an ant colony optimizer that has been proven efficient for this class of problems; and 3) we present MenuOptimizer, a set of interactions integrated into a real interface design tool (QtDesigner). MenuOptimizer supports designers' abilities to cope with uncertainty and recognize good solutions. It allows designers to delegate combinatorial problems to the optimizer, which should solve them quickly enough without disrupting the design process. We show evidence that satisfactory menu designs can be produced for complex problems in minutes.
Interactive record/replay for web application debugging
Brian Burg, Richard Bailey, Amy J. Ko, Michael D. Ernst
DOI: https://doi.org/10.1145/2501988.2502050
During debugging, a developer must repeatedly and manually reproduce faulty behavior in order to inspect different facets of the program's execution. Existing tools for reproducing such behaviors prevent the use of debugging aids such as breakpoints and logging, and are not designed for interactive, random-access exploration of recorded behavior. This paper presents Timelapse, a tool for quickly recording, reproducing, and debugging interactive behaviors in web applications. Developers can use Timelapse to browse, visualize, and seek within recorded program executions while simultaneously using familiar debugging tools such as breakpoints and logging. Testers and end-users can use Timelapse to demonstrate failures in situ and share recorded behaviors with developers, improving bug report quality by obviating the need for detailed reproduction steps. Timelapse is built on Dolos, a novel record/replay infrastructure that ensures deterministic execution by capturing and reusing program inputs both from the user and from external sources such as the network. Dolos introduces negligible overhead and does not interfere with breakpoints and logging. In a small user evaluation, participants used Timelapse to accelerate existing reproduction activities, but were not significantly faster or more successful in completing the larger tasks at hand. Together, the Dolos infrastructure and Timelapse developer tool support systematic bug reporting and debugging practices.
Surround-see: enabling peripheral vision on smartphones during active use
Xing-Dong Yang, Khalad Hasan, Neil D. B. Bruce, Pourang Irani
DOI: https://doi.org/10.1145/2501988.2502049
Mobile devices are endowed with significant sensing capabilities. However, their ability to 'see' their surroundings, during active use, is limited. We present Surround-See, a self-contained smartphone equipped with an omni-directional camera that enables peripheral vision around the device to augment daily mobile tasks. Surround-See provides mobile devices with a field-of-view collinear to the device screen. This capability facilitates novel mobile tasks such as pointing at objects in the environment to interact with content, operating the mobile device at a physical distance, and allowing the device to detect user activity, even when the user is not holding it. We describe Surround-See's architecture, and demonstrate applications that exploit peripheral 'seeing' capabilities during active use of a mobile device. Users confirm the value of embedding peripheral vision capabilities on mobile devices and offer insights for novel usage methods.
Imaginary reality gaming: ball games without a ball
Patrick Baudisch, Henning Pohl, S. Reinicke, Emilia Wittmers, Patrick Lühne, Marius Knaust, Sven Köhler, Patrick Schmidt, Christian Holz
DOI: https://doi.org/10.1145/2501988.2502012
We present imaginary reality games, i.e., games that mimic the respective real-world sport, such as basketball or soccer, except that there is no visible ball. The ball is virtual and players learn about its position only from watching each other act and a small amount of occasional auditory feedback, e.g., when a person is receiving the ball. Imaginary reality games maintain many of the properties of physical sports, such as unencumbered play, physical exertion, and immediate social interaction between players. At the same time, they allow introducing game elements from video games, such as power-ups, non-realistic physics, and player balancing. Most importantly, they create a new game dynamic around the notion of the invisible ball. To allow players to successfully interact with the invisible ball, we have created a physics engine that evaluates all plausible ball trajectories in parallel, allowing the game engine to select the trajectory that leads to the most enjoyable game play while still favoring skillful play.
PacCAM: material capture and interactive 2D packing for efficient material usage on CNC cutting machines
D. Saakes, Thomas Cambazard, J. Mitani, T. Igarashi
DOI: https://doi.org/10.1145/2501988.2501990
The availability of low-cost digital fabrication devices enables new groups of users to participate in the design and fabrication of things. However, software to assist in the transition from design to actual fabrication is currently overlooked. In this paper, we introduce PacCAM, a system for packing 2D parts within a given source material for fabrication using 2D cutting machines. Our solution combines computer vision to capture the source material shape with a user interface that incorporates 2D rigid body simulation and snapping. A user study demonstrated that participants could make layouts faster with our system compared with using traditional drafting tools. PacCAM caters to a variety of 2D fabrication applications and can contribute to the reduction of material waste.
PneUI: pneumatically actuated soft composite materials for shape changing interfaces
Lining Yao, Ryuma Niiyama, Jifei Ou, Sean Follmer, Clark Della Silva, H. Ishii
DOI: https://doi.org/10.1145/2501988.2502037
This paper presents PneUI, an enabling technology to build shape-changing interfaces through pneumatically-actuated soft composite materials. The composite materials integrate the capabilities of both input sensing and active shape output. This is enabled by the composites' multi-layer structures with different mechanical or electrical properties. The shape changing states are computationally controllable through pneumatics and pre-defined structure. We explore the design space of PneUI through four applications: height changing tangible phicons, a shape changing mobile, a transformable tablet case and a shape shifting lamp.
DemoCut: generating concise instructional videos for physical demonstrations
Pei-Yu Chi, Joyce Liu, Jason Linder, Mira Dontcheva, Wilmot Li, Bjoern Hartmann
DOI: https://doi.org/10.1145/2501988.2502052
Amateur instructional videos often show a single uninterrupted take of a recorded demonstration without any edits. While easy to produce, such videos are often too long as they include unnecessary or repetitive actions as well as mistakes. We introduce DemoCut, a semi-automatic video editing system that improves the quality of amateur instructional videos for physical tasks. DemoCut asks users to mark key moments in a recorded demonstration using a set of marker types derived from our formative study. Based on these markers, the system uses audio and video analysis to automatically organize the video into meaningful segments and apply appropriate video editing effects. To understand the effectiveness of DemoCut, we report a technical evaluation of seven video tutorials created with DemoCut. In a separate user evaluation, all eight participants successfully created a complete tutorial with a variety of video editing effects using our system.
{"title":"Session details: Vision","authors":"Tom Yeh","doi":"10.1145/3254704","DOIUrl":"https://doi.org/10.1145/3254704","url":null,"abstract":"","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122651466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}