Hover Pad: interacting with autonomous and self-actuated displays in space
Julian Seifert, Sebastian Boring, Christian Winkler, F. Schaub, Fabian Schwab, Steffen Herrdum, Fabian Maier, Daniel Mayer, E. Rukzio
Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, 2014. DOI: 10.1145/2642918.2647385
Handheld displays enable flexible spatial exploration of information spaces -- users can physically navigate through three-dimensional space to access information at specific locations. Having users constantly hold the display, however, has several limitations: (1) inaccuracies due to natural hand tremors; (2) fatigue over time; and (3) exploration limited to arm's reach. We investigate autonomous, self-actuated displays that can freely move and hold their position and orientation in space without users having to hold them at all times. We illustrate various stages of such a display's autonomy, ranging from manual to fully autonomous, which -- depending on the task -- facilitate the interaction. Further, we discuss possible motion control mechanisms for these displays and present several interaction techniques enabled by such displays. Our Hover Pad toolkit enables exploring five degrees of freedom of self-actuated and autonomous displays, along with the developed control and interaction techniques. We illustrate the utility of our toolkit with five prototype applications, such as a volumetric medical data explorer.
Situated crowdsourcing using a market model
S. Hosio, Jorge Gonçalves, V. Lehdonvirta, Denzil Ferreira, V. Kostakos
Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, 2014. DOI: 10.1145/2642918.2647362
Research is increasingly highlighting the potential for situated crowdsourcing to overcome some crucial limitations of online crowdsourcing. However, it remains unclear whether a situated crowdsourcing market can be sustained, and whether worker supply responds to price-setting in such a market. Our work is the first to systematically investigate workers' behaviour and response to economic incentives in a situated crowdsourcing market. We show that the market-based model is a sustainable approach to recruiting workers and obtaining situated crowdsourcing contributions. We also show that the price mechanism is a very effective tool for adjusting the supply of labour in a situated crowdsourcing market. Our findings advance the body of work investigating situated crowdsourcing.
Trampoline: a double-sided elastic touch device for creating reliefs
Jaehyun Han, Jiseong Gu, Geehyuk Lee
Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, 2014. DOI: 10.1145/2642918.2647381
Although reliefs are frequently used to add patterns to product surfaces, there is a lack of interaction techniques for modeling reliefs on the surfaces of virtual objects. We adopted the repoussé and chasing artwork techniques as the basis of an alternative interaction technique for modeling reliefs on virtual surfaces. To support this interaction technique, we developed the double-sided touchpad Trampoline, which can detect the position and force of a finger touch on both sides. Additionally, Trampoline provides users with elastic feedback, as its surface consists of a stretchable fabric. We implemented a relief application with this device and the developed interaction technique. An informal user study showed that the proposed system can be a promising solution for creating reliefs.
WristFlex: low-power gesture input with wrist-worn pressure sensors
A. Dementyev, J. Paradiso
Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, 2014. DOI: 10.1145/2642918.2647396
In this paper we present WristFlex, an always-available on-body gestural interface. Using an array of force-sensitive resistors (FSRs) worn around the wrist, the interface can distinguish subtle finger pinch gestures with high accuracy (>80%) and speed. The system is trained to classify gestures from subtle tendon movements at the wrist. We demonstrate that WristFlex is a complete system that works wirelessly in real time. The system is simple and lightweight in terms of power consumption and computational overhead. WristFlex's sensor power consumption is 60.7 µW, allowing the prototype to potentially last more than a week on a small lithium polymer battery. Also, WristFlex is small and unobtrusive, and can be integrated into a wristwatch or a bracelet. We perform user studies to evaluate its accuracy, speed, and repeatability. We demonstrate that the number of gestures can be extended with orientation data from an accelerometer. We conclude by showing example applications.
PrintScreen: fabricating highly customizable thin-film touch-displays
Simon Olberding, Michael Wessely, Jürgen Steimle
Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, 2014. DOI: 10.1145/2642918.2647413
PrintScreen is an enabling technology for digital fabrication of customized flexible displays using thin-film electroluminescence (TFEL). It enables inexpensive and rapid fabrication of highly customized displays in low volume, in a simple lab environment, print shop or even at home. We show how to print ultra-thin (120 µm) segmented and passive matrix displays in greyscale or multi-color on a variety of deformable and rigid substrate materials, including PET film, office paper, leather, metal, stone, and wood. The displays can have custom, unconventional 2D shapes and can be bent, rolled and folded to create 3D shapes. We contribute a systematic overview of graphical display primitives for customized displays and show how to integrate them with static print and printed electronics. Furthermore, we contribute a sensing framework, which leverages the display itself for touch sensing. To demonstrate the wide applicability of PrintScreen, we present application examples from ubiquitous, mobile and wearable computing.
Individual variation in susceptibility to cybersickness
Lisa Rebenitsch, C. Owen
Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, 2014. DOI: 10.1145/2642918.2647394
We examined the background characteristics of virtual reality participants in order to determine correlations with cybersickness. As 3D media and new VR display technologies from companies such as Oculus and Sony become more popular, the incidence of cybersickness is likely to increase. Understanding the impact of individual backgrounds on susceptibility can help shed light on which individuals are more likely to be affected. Of the factors studied, past history of motion sickness and video game play have the best predictive power for cybersickness. A model to estimate the likelihood of cybersickness from background characteristics is proposed.
Brain-based target expansion
Daniel Afergan, T. Shibata, Samuel W. Hincks, Evan M. Peck, B. Yuksel, Remco Chang, R. Jacob
Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, 2014. DOI: 10.1145/2642918.2647414
The bubble cursor is a promising cursor expansion technique, improving a user's movement time and accuracy in pointing tasks. We introduce a brain-based target expansion system, which improves the efficacy of the bubble cursor by increasing the expansion of high-importance targets at the optimal time, based on brain measurements correlated with a particular type of multitasking. We demonstrate through controlled experiments that brain-based target expansion can deliver a graded and continuous level of assistance to users according to their cognitive state, thereby improving task and speed-accuracy metrics, even without explicit visual changes to the system. Such an adaptation is ideal for use in complex systems to steer users toward higher-priority goals during times of increased demand.
Improvements to keyboard optimization with integer programming
A. Karrenbauer, Antti Oulasvirta
Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, 2014. DOI: 10.1145/2642918.2647382
Keyboard optimization is concerned with the design of keyboards for different terminals, languages, user groups, and tasks. Previous work in HCI has used random-search-based methods, such as simulated annealing. These "black box" approaches are convenient because good solutions are found quickly and no assumptions need to be made about the objective function. This paper contributes by developing integer programming (IP) as a complementary approach. To this end, we present IP formulations for the letter assignment problem and solve them by branch-and-bound. Although computationally expensive, we show that IP offers two strong benefits. First, its structured, non-random search improves the outcomes. Second, it guarantees bounds, which increases the designer's confidence in the quality of results. We report improvements to three keyboard optimization cases.
Teegi: tangible EEG interface
Jérémy Frey, Renaud Gervais, Stéphanie Fleck, F. Lotte, M. Hachet
Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, 2014. DOI: 10.1145/2642918.2647368
We introduce Teegi, a Tangible ElectroEncephaloGraphy (EEG) Interface that enables novice users to get to know more about something as complex as brain signals, in an easy, engaging and informative way. To this end, we have designed a new system based on a unique combination of spatial augmented reality, tangible interaction and real-time neurotechnologies. With Teegi, a user can visualize and analyze his or her own brain activity in real-time, on a tangible character that can be easily manipulated, and with which it is possible to interact. An exploration study has shown that interacting with Teegi seems to be easy, motivating, reliable and informative. Overall, this suggests that Teegi is a promising and relevant training and mediation tool for the general public.
AttachMate: highlight extraction from email attachments
J. Hailpern, S. Asur, Kyle Rector
Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, 2014. DOI: 10.1145/2642918.2647419
While email is a major conduit for information sharing in the enterprise, there has been little work on exploring the files sent along with these messages -- attachments. These accompanying documents can be large (multiple megabytes), lengthy (multiple pages), and not optimized for the smaller screen sizes, limited reading time, and expensive bandwidth of mobile users. Thus, attachments can increase data storage costs (for both end users and email servers), drain users' time when irrelevant, cause important information to be missed when ignored, and pose a serious access issue for mobile users. To address these problems we created AttachMate, a novel email attachment summarization system. AttachMate can summarize the content of email attachments and automatically insert the summary into the text of the email. AttachMate also stores all files in the cloud, reducing file storage costs and bandwidth consumption. In this paper, the primary contribution is the AttachMate client/server architecture. To ground, support, and validate the AttachMate system, we present two upfront studies (813 participants) to understand the state and limitations of attachments, a novel algorithm to extract representative concept sentences (tested through two validation studies), and a user study of AttachMate within an enterprise.