Touch input is increasingly popular. Because the human fingertip is relatively large, finger input is imprecise, and acquiring small targets on a touch screen remains challenging. In this extended abstract, we present LinearDragger, a new and integrated one-finger target acquisition technique for small and clustered targets. It allows users to select targets in densely clustered groups with a single touch-drag-release operation, mapping the 2D selection problem to a more precise 1D selection problem that is independent of the target distribution. It also avoids finger occlusion and creates no visual distraction. LinearDragger is particularly suitable for applications with dense targets and rich visual elements.
{"title":"LinearDragger: a linear selector for one-finger target acquisition","authors":"Oscar Kin-Chung Au, Xiaojun Su, Rynson W. H. Lau","doi":"10.1145/2559206.2574791","DOIUrl":"https://doi.org/10.1145/2559206.2574791","url":null,"abstract":"Touch input is increasingly popular nowadays. The human finger has considerably large fingertip size and finger input is imprecise. Acquiring small targets on a touch screen is still a challenging task. In this extended abstract, we present the LinearDragger, a new and integrated one-finger target acquisition technique for small and clustered targets. It allows users to select targets in dense clustered groups easily with a single touch-drag-release operation and maps the 2D selection problem into a more precise 1D selection problem, which is independent of the target distribution. Besides, it also avoids finger occlusion and does not create visual distraction. LinearDragger is particularly suitable for applications with dense targets and rich visual elements.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126653722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Current standard PIN entry systems for mobile devices are vulnerable to shoulder surfing. In this paper, we present VibraInput, a two-step PIN entry system for mobile devices that combines vibration and visual information. The system uses only four vibration patterns, with which users enter a digit via two distinct selections. We believe this design secures PIN entry while letting users easily remember and recognize the patterns. Moreover, it can be implemented on current off-the-shelf mobile devices. We designed two prototypes of VibraInput. Our experiment shows a mean failure rate of 4.0%, and the system exhibits good security properties.
{"title":"Vibrainput: two-step PIN entry system based on vibration and visual information","authors":"T. Kuribara, B. Shizuki, J. Tanaka","doi":"10.1145/2559206.2581187","DOIUrl":"https://doi.org/10.1145/2559206.2581187","url":null,"abstract":"Current standard PIN entry systems for mobile devices are not safe to shoulder surfing. In this paper, we present VibraInput, a two-step PIN entry system based on the combination of vibration and visual information for mobile devices. This system only uses four vibration patterns, with which users enter a digit by two distinct selections. We believe that this design secures PIN entry, and allows users to easily remember and recognize the patterns. Moreover, it can be implemented on current off-the-shelf mobile devices. We designed two kinds of prototypes of VibraInput. The experiment shows that the mean failure rate is 4.0%; moreover, the system shows good security properties.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114918207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, there has been increasing interest in the sensors embedded in mobile devices. The multiple interaction modalities these sensors enable greatly enrich human-mobile interaction. However, mobile applications make only limited use of sensors and rarely combine them. In this paper, we seek to remedy this problem by detailing the key challenges facing developers who want to integrate and combine several sensor-based modalities. We then present our model-based solution: the M4L modeling language and the MIMIC framework, which aim to ease the development of sensor-based multimodal mobile applications by generating up to 100% of their interfaces.
{"title":"MIMIC: leveraging sensor-based interactions in multimodal mobile applications","authors":"Nadia Elouali, Xavier Le Pallec, J. Rouillard, Jean-Claude Tarby","doi":"10.1145/2559206.2581222","DOIUrl":"https://doi.org/10.1145/2559206.2581222","url":null,"abstract":"In recent years, there has been an increasing interest in the presence of sensors in mobile devices. Emergence of multiple modalities based on these sensors greatly enriches the human-mobile interaction. However, mobile applications slightly involve sensors and rarely combine them simultaneously. In this paper, we seek to remedy this problem by detailing the key challenges that face developers who want to integrate several sensor-based modalities and combine them. We then present our model-based approach solution. We introduce M4L modeling language and MIMIC framework that aim to produce easily sensor-based multimodal mobile applications by generating up to 100% of their interfaces.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116412573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Flexible displays offer new interaction techniques, such as bend gestures, but little work has been done to support touch input, the most common input for handheld displays. In this paper, we explore touch input using the thumb of the holding hand and compare it across different tapping tasks on a flexible and a rigid tablet. We present initial design guidelines for thumb-based touch input on flexible devices. Our results suggest that users can perform tapping interactions with the thumb on both rigid and flexible devices with similar accuracy, and that they prefer holding the display on the side or the bottom corner rather than the bottom center.
{"title":"Exploring tapping with thumb input for flexible tablets","authors":"M. Riyadh","doi":"10.1145/2559206.2579422","DOIUrl":"https://doi.org/10.1145/2559206.2579422","url":null,"abstract":"Flexible displays offer new interaction techniques, such as bend gestures, but a little work has been done to support touch input, the most common input for handheld displays. In this paper, we explore touch input using the thumb of the holding hand, and compare it for different tapping tasks, between a flexible and a rigid tablet. We present initial design guidelines to use touch input with thumb in flexible devices. Our result suggests that users can perform tapping interaction using thumb input in both rigid and flexible devices with similar accuracy, and they prefer holding the display on the side or the bottom corner over the bottom center.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117083993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a pilot study investigating the relationship between frame rate and latency and their effects on moving target selection. Under several latency/frame rate conditions, participants were given 20 seconds to click as many moving targets as possible. Performance at 60 FPS was 14% higher than at 30 FPS, but the difference between 45 and 60 FPS was not significant. Latency alone had a smaller impact than the corresponding frame rate difference. While both factors affect performance, frame rate had a larger effect than the latency it introduces.
{"title":"Is 60 FPS better than 30?: the impact of frame rate and latency on moving target selection","authors":"Benjamin F. Janzen, Robert J. Teather","doi":"10.1145/2559206.2581214","DOIUrl":"https://doi.org/10.1145/2559206.2581214","url":null,"abstract":"We present a pilot study investigating the relationship between frame rate and latency and their effects on moving target selection. In several latency/frame rate conditions, participants were given a 20 second time frame to click as many moving targets as possible. Performance with 60 FPS frame rate was 14% higher than 30 FPS, but the difference between 45 and 60 FPS was not significant. Latency alone had lower impact than the corresponding frame rate difference. While both factors impact performance, frame rate had a larger effect than the latency it introduces.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129733298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a curated collection of fictional abstracts for papers that could appear in the proceedings of the 2039 CHI Conference. It provides an opportunity to consider the various visions guiding work in HCI, the futures toward which we (believe we) are working, and how research in the field might relate to broader social, political, and cultural changes over the next quarter century.
{"title":"CHI 2039: speculative research visions","authors":"E. Baumer, June Ahn, Mei Bie, Elizabeth M. Bonsignore, Ahmet Börütecene, O. Buruk, Tamara L. Clegg, A. Druin, Florian Echtler, D. Gruen, Mona Leigh Guha, Chelsea Hordatt, A. Krüger, S. Maidenbaum, Meethu Malu, Brenna McNally, Michael J. Muller, Leyla Norooz, J. Norton, Oğuzhan Özcan, Donald J. Patterson, A. Riener, Steven I. Ross, Karen Rust, Johannes Schöning, M. S. Silberman, Bill Tomlinson, Jason C. Yip","doi":"10.1145/2559206.2578864","DOIUrl":"https://doi.org/10.1145/2559206.2578864","url":null,"abstract":"This paper presents a curated collection of fictional abstracts for papers that could appear in the proceedings of the 2039 CHI Conference. It provides an opportunity to consider the various visions guiding work in HCI, the futures toward which we (believe we) are working, and how research in the field might relate with broader social, political, and cultural changes over the next quarter century.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128419155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Current mobile health apps allow users to track and monitor their fitness statistics and enjoy exercise. As the next generation of mobile devices arrives, new apps must be developed to improve upon current exercise experiences. We introduce Fitnamo, a mobile health app designed for Google Glass. Fitnamo offers entertaining exercise routines through the use of augmented reality games and uses a novel motivational nudging system to encourage users to be active.
{"title":"Fitnamo: using bodydata to encourage exercise through google glass™","authors":"Edward Nguyen, Tanmay Modak, Elton Dias, Yang Yu, Liang Huang","doi":"10.1145/2559206.2580933","DOIUrl":"https://doi.org/10.1145/2559206.2580933","url":null,"abstract":"Current mobile health apps allow users to track and monitor their fitness statistics and enjoy exercise. As the next generation of mobile devices arrives, new apps must be developed to improve upon current exercise experiences. We introduce Fitnamo, a mobile health app designed for Google Glass. Fitnamo offers entertaining exercise routines through the use of augmented reality games and uses a novel motivational nudging system to encourage users to be active.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"203 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128464532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Web-based software tutorials contain a wealth of information describing software tasks and workflows. There is growing interest in mining these resources for task modeling, automation, machine-guided help, interface search, and other applications. As a first step, past work has shown success in extracting individual commands from textual instructions. In this paper, we ask: How much further do we have to go to more fully interpret or automate a tutorial? We take a bottom-up approach, asking what it would take to: (1) interpret individual steps, (2) follow sequences of steps, and (3) locate procedural content in larger texts.
{"title":"Mining online software tutorials: challenges and open problems","authors":"Adam Fourney, Michael A. Terry","doi":"10.1145/2559206.2578862","DOIUrl":"https://doi.org/10.1145/2559206.2578862","url":null,"abstract":"Web-based software tutorials contain a wealth of information describing software tasks and workflows. There is growing interest in mining these resources for task modeling, automation, machine-guided help, interface search, and other applications. As a first step, past work has shown success in extracting individual commands from textual instructions. In this paper, we ask: How much further do we have to go to more fully interpret or automate a tutorial? We take a bottom-up approach, asking what it would take to: (1) interpret individual steps, (2) follow sequences of steps, and (3) locate procedural content in larger texts.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128471262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The replication or recreation of research is a core part of many disciplines. Yet unlike disciplines such as medicine, physics, or mathematics, we have almost no drive and barely any reason to investigate the work of other HCI researchers. Our community is driven to publish novel results in novel spaces using novel designs, and to keep up with evolving technology. Further, our community spans a broad spectrum of research styles, from those who investigate cultural phenomena with ethnographic methods to those who validate or refute prior work with experimental methods. The aim of this workshop is to continue facilitating a cultural shift toward our community naturally adopting replication in situations considered worth investigating.
{"title":"RepliCHI: the workshop II","authors":"Max L. Wilson, Ed H. Chi, S. Reeves, D. Coyle","doi":"10.1145/2559206.2559233","DOIUrl":"https://doi.org/10.1145/2559206.2559233","url":null,"abstract":"The replication or recreation of research is a core part of many disciplines. Yet unlike many other disciplines, like medicine, physics, or mathematics, we have almost no drive and barely any reason to consider investigating the work of other HCI researchers. Our community is driven to publish novel results in novel spaces using novel designs, and to keep up with evolving technology. Further, our community contains a broad spectrum of research styles, from those that would aim to investigate cultural phenomenon observed with ethnographic measures, to those who would validate or refute prior work with experimental methods. The aim of this workshop is to continue to facilitate a cultural shift towards our community naturally adopting replication techniques in situations that are considered worth investigating.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128808207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Paddle is a highly deformable mobile device that leverages engineering principles from the design of the Rubik's Magic, a folding plate puzzle. The various transformations supported by Paddle bridge the gap between the differently sized mobile devices available today, such as phones, armbands, tablets, and game controllers. In addition, Paddle can be transformed into different physical controls in only a few steps, such as peeking options, a ring for scrolling through lists, and a book-like form factor for leafing through pages. These special-purpose physical controls have the advantage of providing clear physical affordances and exploiting people's innate abilities for manipulating objects in the real world. We investigated the benefits of these interaction techniques in detail in [1]. In contrast to traditional touch screens, physical controls are usually less flexible and therefore less suitable for mobile settings. Paddle shows how mobile devices can be designed to bring physical controls to mobile devices, combining the flexibility of touch screens with the physical qualities of real-world controls. Our current prototype is tracked with an optical tracking system and uses a projector for visual output. In the future, we envision devices similar to Paddle that are entirely self-contained, using tiny integrated displays.
{"title":"Paddle: highly deformable mobile devices with physical controls","authors":"Raf Ramakers, Johannes Schöning, K. Luyten","doi":"10.1145/2559206.2579524","DOIUrl":"https://doi.org/10.1145/2559206.2579524","url":null,"abstract":"Paddle is a highly deformable mobile device that leverages engineering principles from the design of the Rubik's Magic, a folding plate puzzle. The various transformations supported by Paddle bridges the gap between differently sized mobile devices available nowadays, such as phones, armbands, tablets and game controllers. Besides this, Paddle can be transformed to different physical controls in only a few steps, such as peeking options, a ring to scroll through lists and a book-like form factor to leaf through pages. These special-purpose physical controls have the advantage of providing clear physical affordances and exploiting people's innate abilities for manipulating objects in the real world. We investigated the benefits of these interaction techniques in detail in [1]. In contrast to traditional touch screens, physical controls are usually less flexible and therefore less suitable for mobile settings. Paddle, shows how mobile devices can be designed to bring physical controls to mobile devices and thus combine the flexibility of touch screens with the physical qualities that real world controls provide. Our current prototype is tracked with an optical tracking system and uses a projector to provide visual output. In the future, we envision devices similar to Paddle that are entirely self-contained, using tiny integrated displays.","PeriodicalId":125796,"journal":{"name":"CHI '14 Extended Abstracts on Human Factors in Computing Systems","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128524505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}