From tactile to virtual: using a smartwatch to improve spatial map exploration for visually impaired users
Sandra Bardot, M. Serrano, C. Jouffrais
DOI: 10.1145/2935334.2935342

Tactile raised-line maps are paper maps widely used by visually impaired people. We designed a mobile technique, based on hand tracking and a smartwatch, to leverage pervasive access to virtual maps. The smartwatch renders localized text-to-speech and vibratory feedback during hand exploration, and also provides filtering functions activated by swipe gestures. We conducted a first study comparing the usability of a raised-line map with three virtual maps (plain, with filter, with filter and grid). The results show that virtual maps are usable, and that adding a filter, or a filter and a grid, significantly speeds up data exploration and selection. A follow-up case study showed that visually impaired users were able to achieve a complex task with the device, i.e., finding spatial correlations between two sets of data.

Effect of target size on non-visual text-entry
André Rodrigues, Hugo Nicolau, Kyle Montague, L. Carriço, Tiago Guerreiro
DOI: 10.1145/2935334.2935376

Touch-enabled devices come in a growing variety of screen sizes; however, there is little knowledge of the effect of key size on non-visual text-entry performance. We conducted a user study with 12 blind participants to investigate how non-visual input performance varies across four QWERTY keyboard sizes (ranging from 15 mm down to 2.5 mm). This paper presents an analysis of typing performance and touch behaviors and discusses its implications for future research. Our findings show that there is an upper limit to the benefits of larger target sizes, between 10 mm and 15 mm. Input speed decreases from 4.5 to 2.4 words per minute (WPM) for target sizes below 10 mm. The smallest size was deemed unusable by participants, even though performance was on par with previous work.
{"title":"Session details: Wrist and hand interaction I","authors":"M. Rohs","doi":"10.1145/3254084","DOIUrl":"https://doi.org/10.1145/3254084","url":null,"abstract":"","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129226436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Does prediction really help in Marathi text input?: empirical analysis of a longitudinal study
G. Dalvi, Shashank Ahire, Nagraj Emmadi, Manjiri Joshi, Anirudha N. Joshi, Sanjay Ghosh, Prasad Ghone, Narendra Parmar
DOI: 10.1145/2935334.2935366

As part of an ongoing standardization effort, we were asked to evaluate Marathi text input mechanisms on smartphones. We undertook a between-subject longitudinal evaluation of four existing keyboards with 153 novice users, each of whom participated in 31 sessions spread over 3–4 weeks. In this paper, we present the empirical results of the performance of these keyboards and discuss them with respect to their designs. We found that keyboards with logical layouts performed marginally better than keyboards with partially frequency-based layouts. The results also showed that, when typing Marathi, users performed worse on keyboards with word prediction features than on keyboards without them. We speculate that this difference in performance reflects a "cognitive toll" that users pay to use word prediction. We identify several directions for future research.

See you next time: a model for modern shoulder surfers
Oliver Wiese, Volker Roth
DOI: 10.1145/2935334.2935388

Friends, family and colleagues at work may repeatedly observe how their peers unlock their smartphones. These "insiders" may combine multiple partial observations to form a hypothesis of a target's secret. This changing landscape requires that we update the methods used to assess the security of unlocking mechanisms against human shoulder surfing attacks. In our paper, we introduce a methodology to study shoulder surfing risks in the insider threat model. Our methodology dissects the authentication process into minimal observations by humans. Further processing is based on simulations. The outcome is an estimate of the number of observations needed to break a mechanism. The flexibility of this approach benefits the design of new mechanisms. We demonstrate the application of our methodology by performing an analysis of the SwiPIN scheme published at CHI 2015. Our results indicate that SwiPIN can be defeated reliably by a majority of the population with as few as 6 to 11 observations.

Epistenet: facilitating programmatic access & processing of semantically related mobile personal data
Sauvik Das, Jason Wiese, Jason I. Hong
DOI: 10.1145/2935334.2935349

Effective use of personal data is a core utility of modern smartphones. On Android, several challenges make developing compelling personal data applications difficult. First, personal data is stored in isolated silos: relationships between data from different providers are missing, data must be queried by its source rather than by its meaning, and the persistence of different types of data differs greatly. Second, the interfaces to these data are inconsistent and complex; developers are forced to interleave SQL with Java boilerplate, resulting in error-prone code that does not generalize. Our solution is Epistenet, a toolkit that (1) unifies the storage and treatment of mobile personal data; (2) preserves relationships between disparate data; (3) allows for expressive queries based on the meaning of data rather than its source (e.g., one can query for all communications with John while at the park); and (4) provides a simple, native query interface to facilitate development.

2D-Dragger: unified touch-based target acquisition with constant effective width
Qingkun Su, Oscar Kin-Chung Au, Pengfei Xu, Hongbo Fu, Chiew-Lan Tai
DOI: 10.1145/2935334.2935339

In this work we introduce 2D-Dragger, a unified touch-based target acquisition technique that enables easy access to small targets in dense regions or to distant targets on screens of various sizes. With our tool, the effective width of a target is constant, so a fixed scale of finger movement captures a new target. The tool is thus insensitive to the distribution and size of the selectable targets, and it consistently works well across screen sizes, from mobile to wall-sized displays. Our user studies show that, overall, 2D-Dragger performs best compared to state-of-the-art techniques for selecting both near and distant targets of various sizes and densities.

People, places, and perceptions: effects of location check-in awareness on impressions of strangers
Colin Fitzpatrick, Jeremy P. Birnholtz, D. Gergle
DOI: 10.1145/2935334.2935369

Social media platforms and mobile applications increasingly include geographic features and services. While previous research has looked into how people perceive, interpret, and act on information available about a person, the spatial self (an individual's display of mobility through space for identity performance) is underexplored, especially in encounters with strangers. Strangers offer a unique opportunity for exploring relational contexts and how those may relate to interpreting and reacting to the spatial self. We ran a 3 (map: personal, social, task) × 3 (relationship: date, friend, coworker) × 2 (participant gender: female, male) laboratory experiment with a mixed-model design to see if and how the spatial self affects interest in future interaction. We find that map type, relationship, and gender all affect the ways in which people interpret and act on expressing interest in an individual. We discuss theoretical and design implications of how spatial selves affect this process.

Nail+: sensing fingernail deformation to detect finger force touch interactions on rigid surfaces
Min-Chieh Hsiu, Chiuan Wang, Da-Yuan Huang, Jhe-Wei Lin, Yu-Chih Lin, De-Nian Yang, Y. Hung, Mike Y. Chen
DOI: 10.1145/2935334.2935362

Force sensing has been widely used to extend touch input from binary to multiple states, enabling new surface interactions. However, prior force sensing techniques mainly focus on enabling force-applied gestures on specific devices. This paper presents Nail+, a technique that uses fingernail deformation to enable force touch interactions on everyday rigid surfaces. We implemented a prototype, a 3×3 array of 0.2 mm strain sensors mounted on a fingernail, and conducted a 12-participant study to evaluate the feasibility of this sensing approach. Results showed that the average accuracy for distinguishing normal from force-applied tapping and swiping was 84.67%. Finally, we propose two example applications that use the Nail+ prototype to control interfaces on head-mounted displays (HMDs) and on remote screens.

Invisiboard: maximizing display and input space with a full screen text entry method for smartwatches
Aske Mottelson, Christoffer Larsen, Mikkel Lyderik, Paul Strohmeier, Jarrod Knibbe
DOI: 10.1145/2935334.2935360

The small displays of smartwatches make text entry difficult and time consuming. While text entry rates can be increased, this typically comes at the expense of available display space: soft keyboards can easily consume half the display of a tiny-screened device. To combat this problem, we present Invisiboard, an invisible text entry method that uses the entire screen simultaneously for text entry and for display. Invisiboard combines a numberpad-like layout with swipe gestures, which maximizes input target size, provides a familiar layout, and maximizes display space. Through this, Invisiboard achieves entry rates comparable to, or even faster than, an existing research baseline. A user study with 12 participants writing 3264 words revealed an entry rate of 10.6 words per minute (WPM) after 30 minutes, 7% faster than ZoomBoard. Furthermore, with nominal training, some participants demonstrated entry rates of over 30 WPM.