Luis A. Leiva and A. Catalá. "BoD taps: an improved back-of-device authentication technique on smartphones." Proceedings of MobileHCI 2014, pp. 63-66. DOI: 10.1145/2628363.2628372

Abstract: Previous work has shown that back-of-device (BoD) authentication is significantly more secure than standard front-facing approaches. However, the only BoD method available to date (BoD Shapes) is difficult to perform, especially with one hand. In this paper we propose BoD Taps, a novel approach that simplifies BoD authentication while improving its usability. A controlled evaluation with 12 users revealed that BoD Taps and BoD Shapes perform equally well at unlocking the device, but BoD Taps allows users to enter passwords about twice as fast as BoD Shapes. Moreover, BoD Taps is perceived as more usable and less frustrating than BoD Shapes, whether used with one or two hands.
Euan Freeman, S. Brewster, and V. Lantz. "Towards usable and acceptable above-device interactions." Proceedings of MobileHCI 2014, pp. 459-464. DOI: 10.1145/2628363.2634215

Abstract: Gestures above a mobile phone would let users interact with their devices quickly and easily from a distance. While both researchers and smartphone manufacturers are developing new gesture sensing technologies, little is known about how best to design these gestures and interaction techniques. Our research looks at creating usable and socially acceptable above-device interaction techniques. We present an initial gesture collection, a preliminary evaluation of these gestures, and some design recommendations. Our findings identify interesting areas for future research and will help designers create better gesture interfaces.
Hyojeong Shin, Taiwoo Park, Seungwoo Kang, Bupjae Lee, Junehwa Song, Yohan Chon, and H. Cha. "CoSMiC: designing a mobile crowd-sourced collaborative application to find a missing child in situ." Proceedings of MobileHCI 2014, pp. 389-398. DOI: 10.1145/2628363.2628400

Abstract: Finding a missing child is an important problem that concerns not only parents but society as a whole. It is essential and natural to use serendipitous clues from nearby people when searching for a missing child. In this paper, we explore a new architecture of crowd collaboration to expedite this mission-critical process and propose a crowd-sourced cooperative mobile application, CoSMiC. It helps parents find their missing child quickly, on the spot, before he or she disappears completely. The key idea lies in constructing the child's location history via crowd participation, thereby leading parents to the child easily and quickly. We implemented a prototype application and conducted extensive user studies to assess the design of the application and investigate its potential for practical use.
Elba del Carmen Valderrama Bahamóndez, Bastian Pfleging, N. Henze, and A. Schmidt. "A long-term field study on the adoption of smartphones by children in Panama." Proceedings of MobileHCI 2014, pp. 163-172. DOI: 10.1145/2628363.2628403

Abstract: Computing technology is increasingly being adopted in emerging countries; mobile phones and smartphones in particular are becoming widely used, with a much higher penetration than traditional computers. In our work we investigate how computing technologies, and particularly mobile devices, can support education. While previous work focused on controlled experiments, in this paper we present the results of a 20-week study of mobile phone usage in an emerging region. Our aim was not only to investigate how the phones are used for education but also to learn how they are adopted by children in daily life. By logging screenshots, we used an unsupervised approach that allowed us to observe usage patterns unobtrusively, without the presence of researchers. Instead of offering tailored teaching applications, we used general-purpose applications to support teaching and found that the phone itself was an empowering technology, similar to pen and paper. Based on a detailed analysis of actual use in a natural setting, we derived a set of typical use cases for mobile phones in education and describe how they change learning. In-depth interviews with a teacher, selected guardians, and pupils show that introducing mobile phones has great potential for supporting education in emerging regions.
Stephen Surlin and Paula Gardner. "Mobile experience lab: body editing." Proceedings of MobileHCI 2014, pp. 439-442. DOI: 10.1145/2628363.2633575

Abstract: Body Editing is an interactive installation that combines depth sensing (a Kinect 3D camera), biometric sensors, musical performance, and abstract drawing software to create a mobile wireless interface that sonically and graphically represents the user's motion in space. The wireless nature of this gesture-controlled interface is an explicit attempt to create embodied experiences that encourage users to be more aware of their body through movement and audio/visual feedback, and less focused on technological augmentation.
Nina Runge, Dirk Wenig, and R. Malaka. "Keep an eye on your photos: automatic image tagging on mobile devices." Proceedings of MobileHCI 2014, pp. 513-518. DOI: 10.1145/2628363.2634225

Abstract: In this paper we present how to tag images automatically based on the image itself and sensor data from a mobile device. We developed a system that computes low-level tags using the image and its metadata. Based on these tags and previous user tags, we learn high-level tags. With a client-server implementation we offload computationally expensive algorithms in order to recommend tags as fast as possible. We show which feature extraction methods, in combination with a machine learning technique, recommend the best tags.
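As a minimal illustration of the low-level, metadata-driven side of such a tagging pipeline, the sketch below derives simple tags from a photo's capture metadata. The field names and tagging rules here are invented for illustration; they are not the feature set or tag vocabulary used in the paper.

```python
from datetime import datetime

def low_level_tags(meta):
    """Derive simple low-level tags from a photo's capture metadata.

    `meta` is a dict with optional 'timestamp' (datetime) and 'flash'
    (bool) fields. Both the field names and the rules below are
    illustrative assumptions, not the paper's actual tag set.
    """
    tags = set()
    ts = meta.get("timestamp")
    if ts is not None:
        # Day-of-week and time-of-day tags from the capture timestamp.
        tags.add("weekend" if ts.weekday() >= 5 else "weekday")
        if 6 <= ts.hour < 12:
            tags.add("morning")
        elif 12 <= ts.hour < 18:
            tags.add("afternoon")
        else:
            tags.add("evening")
    if meta.get("flash"):
        # Flash fired: a weak hint that the scene was dimly lit.
        tags.add("low-light")
    return sorted(tags)
```

In the paper's architecture, tags like these would be computed cheaply on the device or server and then combined with the user's earlier tags to learn high-level recommendations.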
Michael Markert, Jens Heitjohann, and Jens Geelhaar. "Phonorama: mobile spatial navigation by directional stereophony." Proceedings of MobileHCI 2014, pp. 609-611. DOI: 10.1145/2628363.2645700

Abstract: Phonorama is a sonar-like mobile application that uses audio hotspots bound to specific geo-locations. Decreasing the distance to an audio source increases its volume. The addition of direction indicators enables rich, immediate, and immersive audio-only spatial exploration: the direction of the audio hotspot relative to the user's location is represented by real-time stereo panning (directional stereophony). If the audio source is to the left of the user, the volume on the left headphone speaker is louder than on the right. While the software was created for audio guides in the context of media arts, it may be useful for many kinds of acoustic navigation, e.g. information hotspots in museums and public space, navigational aids, assistive technologies, social networking, or localized advertisements.
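The distance-to-volume and bearing-to-pan mapping described above can be sketched as follows. The linear distance falloff, the constant-power pan law, and all names are illustrative assumptions, not the authors' implementation:

```python
import math

def stereo_mix(user_xy, heading_deg, hotspot_xy, max_dist=100.0):
    """Return (left_gain, right_gain) in [0, 1] for one audio hotspot.

    Volume rises as the user approaches the hotspot; the left/right
    balance follows the hotspot's bearing relative to the user's
    heading. The linear falloff and constant-power pan curve are
    illustrative choices, not taken from the paper.
    """
    dx = hotspot_xy[0] - user_xy[0]
    dy = hotspot_xy[1] - user_xy[1]
    dist = math.hypot(dx, dy)
    # Overall volume: full at the source, silent beyond max_dist.
    volume = max(0.0, 1.0 - dist / max_dist)
    # Compass-style bearing of the hotspot relative to where the user
    # is facing, normalised to (-180, 180]: negative = to the left.
    bearing = math.degrees(math.atan2(dx, dy)) - heading_deg
    bearing = (bearing + 180.0) % 360.0 - 180.0
    # Map bearing to a pan position in [0, 1] (0 = hard left,
    # 1 = hard right) and apply a constant-power pan curve.
    pan = (max(-90.0, min(90.0, bearing)) + 90.0) / 180.0
    left = volume * math.cos(pan * math.pi / 2)
    right = volume * math.sin(pan * math.pi / 2)
    return left, right
```

A hotspot directly to the user's left then plays only in the left ear, one straight ahead plays equally in both, and either one fades out linearly with distance.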
Amahl Hazelton. "Collective mobile interaction in urban spaces." Proceedings of MobileHCI 2014, pp. 1-2. DOI: 10.1145/2628363.2634235

Abstract: Amahl Hazelton works at the convergence of art, event entertainment, architecture, urban design, and digital technology. With a Master's degree in Urban Planning from McGill University, he is interested in new kinds of urban place-making defined as much by digital technology and experience as by physical form. The technology/city interfaces he directs inspire urban gatherings and cross technical boundaries, integrating massive urban projections with smart-handset-based control systems. X-Agora by Moment Factory is a scalable, connected, real-time media management and playback system that can integrate motion sensors, video screens, and projectors with smartphone sensors, multimedia tools, and data sources.
Wallace P. Lira, Renato Ferreira, C. D. Souza, and S. R. Carvalho. "Experimenting on the cognitive walkthrough with users." Proceedings of MobileHCI 2014, pp. 613-618. DOI: 10.1145/2628363.2628428

Abstract: This paper presents a case study investigating which variant of the Think-Aloud Protocol (the Concurrent Think-Aloud or the Retrospective Think-Aloud) better integrates with the Cognitive Walkthrough with Users. To this end we performed a case study involving twelve users and one usability evaluator. The usability problems uncovered by each method were evaluated to help us understand the strengths and weaknesses of the studied usability testing methods. The results suggest that 1) the Cognitive Walkthrough with Users integrates equally well with both Think-Aloud Protocol variants; 2) the Retrospective Think-Aloud finds more usability problems; and 3) the Concurrent Think-Aloud is slightly faster to perform and more cost-effective. However, this is only one case study, and further research is needed to verify whether the results are statistically significant.
Yusuke Matsuno, Hung-Hsuan Huang, Yu Fang, and K. Kawagoe. "GUIDES: a graphical user identifier scheme using sketching for mobile web-services." Proceedings of MobileHCI 2014, pp. 471-476. DOI: 10.1145/2628363.2634217

Abstract: In this paper, a novel graphical user identifier scheme for web-based mobile services is proposed. Despite rapid progress in biometric authentication technologies for user identification, the traditional text-based UserID-and-password scheme is still widely used, even for mobile web services. On mobile devices, a virtual keyboard is difficult to use for text entry, which makes inputting a text-based identifier slow. To address this difficulty, GUIDES (Graphical User IDEntifier using Sketching) is proposed. With GUIDES, users input their user identifier, called a GUID (Graphical User ID), by drawing a sketch on a mobile touchscreen device. Our experiments indicate that GUIDES enables users to input their User-ID more efficiently, and remember it better, than with the text-based user-ID scheme.
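The abstract does not say how a drawn GUID is matched against the stored one; a common approach for this kind of sketch comparison is to resample each stroke to a fixed number of points, normalize for position and scale, and threshold the mean point-to-point distance. The sketch below is a minimal, hypothetical matcher along those lines; the function names, the resampling count, and the threshold are all assumptions, not details from the paper.

```python
import math

def resample(points, n=32):
    """Resample a stroke (list of (x, y) tuples) to n evenly spaced points."""
    cum = [0.0]  # cumulative arc length at each input point
    for a, b in zip(points, points[1:]):
        cum.append(cum[-1] + math.dist(a, b))
    total = cum[-1]
    if total == 0.0:
        return [points[0]] * n
    out = []
    j = 0
    for k in range(n):
        target = total * k / (n - 1)
        # Advance to the segment containing the target arc length.
        while j < len(points) - 2 and cum[j + 1] < target:
            j += 1
        seg = cum[j + 1] - cum[j]
        t = 0.0 if seg == 0.0 else (target - cum[j]) / seg
        ax, ay = points[j]
        bx, by = points[j + 1]
        out.append((ax + t * (bx - ax), ay + t * (by - ay)))
    return out

def normalize(points):
    """Translate to the origin and scale the bounding box to unit size."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / scale, (y - min(ys)) / scale) for x, y in points]

def sketch_distance(a, b, n=32):
    """Mean distance between two strokes after resampling and normalizing."""
    pa = normalize(resample(a, n))
    pb = normalize(resample(b, n))
    return sum(math.dist(p, q) for p, q in zip(pa, pb)) / n

def matches(drawn, template, threshold=0.1):
    """Accept the drawn GUID if it is close enough to the stored template."""
    return sketch_distance(drawn, template) <= threshold
```

Because strokes are normalized for position and scale, a user can redraw their GUID anywhere on the screen at any size; only the shape of the stroke has to match.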