{"title":"MirrorTouch: combining touch and mid-air gestures for public displays","authors":"Jörg Müller, G. Bailly, Thor Bossuyt, Niklas Hillgren","doi":"10.1145/2628363.2628379","DOIUrl":"https://doi.org/10.1145/2628363.2628379","url":null,"abstract":"In this paper we present a series of three field studies on the integration of multiple modalities (touch and mid-air gestures) in a public display. We analyze our field studies using Conversion Diagrams, an approach to modeling and evaluating the usage of multimodal public displays. Conversion Diagrams highlight the transitions inherent in a multimodal system and provide a systematic approach to investigating which factors affect them and how. We present a semi-automatic annotation technique for obtaining Conversion Diagrams and use them to evaluate interaction in the three field studies. We found that 1) clear affordances for touch were necessary when mid-air gestures were present: a call-to-action caused significantly more users to touch than a button did (+200%); 2) the order of modality usage differed from what we designed for, and location influenced which modality was used first; and 3) small variations in the application led to a considerable increase in users (+290%).","PeriodicalId":74207,"journal":{"name":"MobileHCI : proceedings of the ... International Conference on Human Computer Interaction with Mobile Devices and Services. MobileHCI (Conference)","volume":"83 1","pages":"319-328"},"PeriodicalIF":0.0,"publicationDate":"2014-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81093792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Keep an eye on your photos: automatic image tagging on mobile devices","authors":"Nina Runge, Dirk Wenig, R. Malaka","doi":"10.1145/2628363.2634225","DOIUrl":"https://doi.org/10.1145/2628363.2634225","url":null,"abstract":"In this paper we present how to tag images automatically based on the image and sensor data from a mobile device. We developed a system that computes low-level tags from the image itself and its metadata. Based on these tags and previous user tags, we learn high-level tags. With a client-server implementation, we offload computationally expensive algorithms to recommend tags as quickly as possible. We show which feature extraction methods, combined with a machine learning technique, recommend good tags.","PeriodicalId":74207,"journal":{"name":"MobileHCI : proceedings of the ... International Conference on Human Computer Interaction with Mobile Devices and Services. MobileHCI (Conference)","volume":"16 1","pages":"513-518"},"PeriodicalIF":0.0,"publicationDate":"2014-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82301198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards usable and acceptable above-device interactions","authors":"Euan Freeman, S. Brewster, V. Lantz","doi":"10.1145/2628363.2634215","DOIUrl":"https://doi.org/10.1145/2628363.2634215","url":null,"abstract":"Gestures above a mobile phone would let users interact with their devices quickly and easily from a distance. While both researchers and smartphone manufacturers develop new gesture sensing technologies, little is known about how best to design these gestures and interaction techniques. Our research looks at creating usable and socially acceptable above-device interaction techniques. We present an initial gesture collection, a preliminary evaluation of these gestures and some design recommendations. Our findings identify interesting areas for future research and will help designers create better gesture interfaces.","PeriodicalId":74207,"journal":{"name":"MobileHCI : proceedings of the ... International Conference on Human Computer Interaction with Mobile Devices and Services. MobileHCI (Conference)","volume":"40 1","pages":"459-464"},"PeriodicalIF":0.0,"publicationDate":"2014-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86118280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Phonorama: mobile spatial navigation by directional stereophony","authors":"Michael Markert, Jens Heitjohann, Jens Geelhaar","doi":"10.1145/2628363.2645700","DOIUrl":"https://doi.org/10.1145/2628363.2645700","url":null,"abstract":"Phonorama is a sonar-like mobile application that uses audio hotspots bound to specific geo-locations. Decreasing the distance to an audio source increases its volume. The addition of direction indicators enables rich, immediate, and immersive audio-only spatial exploration: the direction of the audio hotspot relative to the user's location is represented by real-time stereo panning (Directional Stereophony). If the audio source is to the left of the user, the volume on the left headphone speaker is louder than on the right. While the software was created for audio guides in the context of media arts, it may be useful for all kinds of acoustic navigation, e.g. information hotspots in museums and public space, navigational aids, assistive technologies, social networking, or localized advertisements.","PeriodicalId":74207,"journal":{"name":"MobileHCI : proceedings of the ... International Conference on Human Computer Interaction with Mobile Devices and Services. MobileHCI (Conference)","volume":"13 1","pages":"609-611"},"PeriodicalIF":0.0,"publicationDate":"2014-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81978857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
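The directional stereophony that the Phonorama abstract describes (volume rising as distance falls, and left/right panning following the hotspot's bearing relative to the user) can be sketched as follows. This is an illustrative assumption, not code from the paper: the function name, the linear distance falloff, and the constant-power panning curve are all choices of this sketch.

```python
import math

def stereo_gains(user_xy, user_heading_deg, hotspot_xy, max_dist=100.0):
    """Return (left_gain, right_gain) in [0, 1] for one audio hotspot.

    Assumed model (not from the paper): volume falls off linearly with
    distance, and the hotspot's bearing relative to the user's heading
    pans the signal between the left and right channels.
    """
    dx = hotspot_xy[0] - user_xy[0]
    dy = hotspot_xy[1] - user_xy[1]
    dist = math.hypot(dx, dy)
    volume = max(0.0, 1.0 - dist / max_dist)  # louder as the user approaches

    # Bearing of the hotspot relative to where the user is facing
    # (atan2(dx, dy): 0 deg = straight ahead, +90 deg = to the right).
    bearing = math.degrees(math.atan2(dx, dy)) - user_heading_deg
    bearing = (bearing + 180.0) % 360.0 - 180.0  # normalize to [-180, 180)

    # Constant-power pan: -90 deg (or beyond) = fully left, +90 deg = fully right.
    pan = max(-1.0, min(1.0, bearing / 90.0))
    angle = (pan + 1.0) * math.pi / 4.0  # 0 .. pi/2
    return volume * math.cos(angle), volume * math.sin(angle)
```

A source directly ahead yields equal gains on both channels; a source to the left yields a louder left channel, matching the behavior described in the abstract.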
{"title":"Collective mobile interaction in urban spaces","authors":"Amahl Hazelton","doi":"10.1145/2628363.2634235","DOIUrl":"https://doi.org/10.1145/2628363.2634235","url":null,"abstract":"Amahl Hazelton works at the convergence of art, event entertainment, architecture, urban design and digital technology. With a Master's degree in Urban Planning from McGill University, he is interested in new kinds of urban place-making defined as much by digital technology and experience as by physical form. Inspiring urban gatherings, the technology/city interfaces that he directs cross technical boundaries, integrating massive urban projections with smart-handset-based control systems. X-Agora by Moment Factory is a scalable, connected, real-time media management and playback system that can integrate motion sensors, video screens, and projectors with smartphone sensors, multimedia tools and data sources.","PeriodicalId":74207,"journal":{"name":"MobileHCI : proceedings of the ... International Conference on Human Computer Interaction with Mobile Devices and Services. MobileHCI (Conference)","volume":"20 1","pages":"1-2"},"PeriodicalIF":0.0,"publicationDate":"2014-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88195640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identity, identification and identifiability: the language of self-presentation on a location-based mobile dating app","authors":"Jeremy P. Birnholtz, Colin Fitzpatrick, M. Handel, Jed R. Brubaker","doi":"10.1145/2628363.2628406","DOIUrl":"https://doi.org/10.1145/2628363.2628406","url":null,"abstract":"Location-aware mobile applications have become extremely common, with a recent wave of mobile dating applications that provide relatively sparse profiles to connect nearby individuals who may not know each other for immediate social or sexual encounters. These applications have become particularly popular among men who have sex with men (MSM) and raise a range of questions about self-presentation, visibility to others, and impression formation, as traditional geographic boundaries and social circles are crossed. In this paper we address two key questions around how people manage potentially stigmatized identities in using these apps and what types of information they use to self-present in the absence of a detailed profile or rich social cues. To do so, we draw on profile data observed in twelve locations on Grindr, a location-aware social application for MSM. Results suggest clear use of language to manage stigma associated with casual sex, and that users draw regularly on location information and other descriptive language to present concisely to others nearby.","PeriodicalId":74207,"journal":{"name":"MobileHCI : proceedings of the ... International Conference on Human Computer Interaction with Mobile Devices and Services. MobileHCI (Conference)","volume":"6 1","pages":"3-12"},"PeriodicalIF":0.0,"publicationDate":"2014-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84968386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Experimenting on the cognitive walkthrough with users","authors":"Wallace P. Lira, Renato Ferreira, C. D. Souza, S. R. Carvalho","doi":"10.1145/2628363.2628428","DOIUrl":"https://doi.org/10.1145/2628363.2628428","url":null,"abstract":"This paper presents a case study investigating which variant of the Think-Aloud Protocol (the Concurrent Think-Aloud or the Retrospective Think-Aloud) integrates better with the Cognitive Walkthrough with Users. To this end we performed a case study involving twelve users and one usability evaluator. The usability problems uncovered by each method were evaluated to help us understand the strengths and weaknesses of the studied usability testing methods. The results suggest that 1) the Cognitive Walkthrough with Users integrates equally well with both Think-Aloud Protocol variants; 2) the Retrospective Think-Aloud finds more usability problems; and 3) the Concurrent Think-Aloud is slightly faster to perform and was more cost-effective. However, this is only one case study, and further research is needed to verify whether the results are statistically significant.","PeriodicalId":74207,"journal":{"name":"MobileHCI : proceedings of the ... International Conference on Human Computer Interaction with Mobile Devices and Services. MobileHCI (Conference)","volume":"17 1","pages":"613-618"},"PeriodicalIF":0.0,"publicationDate":"2014-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88283161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A long-term field study on the adoption of smartphones by children in panama","authors":"Elba del Carmen Valderrama Bahamóndez, Bastian Pfleging, N. Henze, A. Schmidt","doi":"10.1145/2628363.2628403","DOIUrl":"https://doi.org/10.1145/2628363.2628403","url":null,"abstract":"Computing technology is increasingly being adopted in emerging countries. Mobile phones and smartphones in particular are becoming widely used, with a much higher penetration than traditional computers. In our work we investigate how computing technologies, and particularly mobile devices, can support education. While previous work focused on controlled experiments, in this paper we present the results of a 20-week study of mobile phone usage in an emerging region. Our aim was not only to investigate how the phones are used for education but also to learn how they are adopted by children in daily life. By logging screenshots, we used an unsupervised approach that allowed us to unobtrusively observe usage patterns without the presence of researchers. Instead of offering tailored teaching applications, we used general-purpose applications to support teaching and found that the phone itself was an empowering technology, similar to pen and paper. Based on a detailed analysis of actual use in a natural setting, we derived a set of typical use cases for mobile phones in education and describe how they change learning. From in-depth interviews with a teacher, selected guardians, and pupils we show that introducing mobile phones has great potential for supporting education in emerging regions.","PeriodicalId":74207,"journal":{"name":"MobileHCI : proceedings of the ... International Conference on Human Computer Interaction with Mobile Devices and Services. MobileHCI (Conference)","volume":"13 1","pages":"163-172"},"PeriodicalIF":0.0,"publicationDate":"2014-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84256640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GUIDES: a graphical user identifier scheme using sketching for mobile web-services","authors":"Yusuke Matsuno, Hung-Hsuan Huang, Yu Fang, K. Kawagoe","doi":"10.1145/2628363.2634217","DOIUrl":"https://doi.org/10.1145/2628363.2634217","url":null,"abstract":"In this paper, a novel graphical user identifier scheme for web-based mobile services is proposed. Despite rapid progress in biometric authentication technologies for user identification, the traditional text-based user-ID-and-password scheme is still widely used, even for mobile web services. On mobile devices, entering text with a virtual keyboard is difficult and time-consuming. To address this difficulty, we propose GUIDES (Graphical User IDEntifier using Sketching). With GUIDES, a user inputs a user identifier, called a GUID (Graphical User ID), by drawing a sketch on a mobile touchscreen device. Our experiments indicate that GUIDES enables users to input their user IDs more efficiently, and to remember them better, than the text-based user-ID scheme.","PeriodicalId":74207,"journal":{"name":"MobileHCI : proceedings of the ... International Conference on Human Computer Interaction with Mobile Devices and Services. MobileHCI (Conference)","volume":"53 1","pages":"471-476"},"PeriodicalIF":0.0,"publicationDate":"2014-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90100277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BoD taps: an improved back-of-device authentication technique on smartphones","authors":"Luis A. Leiva, A. Catalá","doi":"10.1145/2628363.2628372","DOIUrl":"https://doi.org/10.1145/2628363.2628372","url":null,"abstract":"Previous work in the literature has shown that back-of-device (BoD) authentication is significantly more secure than standard front-facing approaches. However, the only BoD method available to date (BoD Shapes) is difficult to perform, especially with one hand. In this paper we propose BoD Taps, a novel approach that simplifies BoD authentication while improving its usability. A controlled evaluation with 12 users revealed that BoD Taps and BoD Shapes perform equally well at unlocking the device, but BoD Taps allows users to enter passwords about twice as fast as BoD Shapes. Moreover, BoD Taps is perceived as more usable and less frustrating than BoD Shapes, whether using one hand or two.","PeriodicalId":74207,"journal":{"name":"MobileHCI : proceedings of the ... International Conference on Human Computer Interaction with Mobile Devices and Services. MobileHCI (Conference)","volume":"5 1","pages":"63-66"},"PeriodicalIF":0.0,"publicationDate":"2014-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86190253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}