Talkin' about the weather: incorporating TalkBack functionality and sonifications for accessible app design
Brianna J. Tomlinson, Jonathan H. Schuett, Woodbury Shortridge, Jehoshaph Chandran, B. Walker
As ubiquitous as weather is in our daily lives, individuals with vision impairments endure poorly designed user experiences when attempting to check the weather on their mobile devices. This is primarily caused by a mismatch between the visually based information layout on screen and the order in which a screen reader, such as TalkBack or VoiceOver, presents that information. Additionally, any image or icon included on the screen conveys no information to users who cannot see it. We therefore created the Accessible Weather App, which runs on Android and integrates with the TalkBack screen reader already available on the operating system. We also included a set of auditory weather icons, which use sound rather than visuals to convey current weather conditions quickly and pleasantly. This paper discusses the process for determining which features users would want and require, as well as our methodology for evaluating the beta version of our app.
{"title":"Talkin' about the weather: incorporating TalkBack functionality and sonifications for accessible app design","authors":"Brianna J. Tomlinson, Jonathan H. Schuett, Woodbury Shortridge, Jehoshaph Chandran, B. Walker","doi":"10.1145/2935334.2935390","DOIUrl":"https://doi.org/10.1145/2935334.2935390","url":null,"abstract":"As ubiquitous as weather is in our daily lives, individuals with vision impairments endure poorly designed user experiences when attempting to check the weather on their mobile devices. This is primarily caused by a mismatch between the visually based information layout on screen and the order in which a screen reader, such as TalkBack or VoiceOver, presents the information to users with visual impairments. Additionally, any image or icon included on the screen presents no information to the user if they are not able to see it. Therefore we created the Accessible Weather App to run on Android and integrate with the TalkBack accessibility feature that is already available on the operating system. We also included a set of auditory weather icons which use sound, rather than visuals, to convey current weather conditions to users in a fast and pleasant way. This paper discusses the process for determining what features the users' would want and require, as well as our methodology for evaluating the beta version of our app.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131741928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Changing the camera-to-screen angle to improve AR browser usage
Ashley Colley, Wouter Van Vlaenderen, Johannes Schöning, Jonna Häkkilä
Mobile devices are currently the most commonly used platform for experiencing Augmented Reality (AR). Nevertheless, they typically provide a less than ideal ergonomic experience, requiring the user to operate them with arms raised. In this paper we evaluate how to improve the ergonomics of AR experiences by modifying the angle between the mobile device's camera and its display. Whereas current mobile device cameras point straight out from the back cover, we modify the camera offset angle to 0, 45 and 90 degrees. In addition, we investigate the smartwatch as an AR browser form factor. Key findings are that, whilst the current approximately see-through configuration provides the fastest task completion times, a camera offset angle of 45° reduces task load and was preferred by users. When comparing form factors and screen sizes, the smartwatch format was found to be unsuitable for AR browsing.
{"title":"Changing the camera-to-screen angle to improve AR browser usage","authors":"Ashley Colley, Wouter Van Vlaenderen, Johannes Schöning, Jonna Häkkilä","doi":"10.1145/2935334.2935384","DOIUrl":"https://doi.org/10.1145/2935334.2935384","url":null,"abstract":"Mobile devices are currently the most commonly used platform to experience Augmented Reality (AR). Nevertheless, they typically provide a less than ideal ergonomic experience, requiring the user to operate them with arms raised. In this paper we evaluate how to improve the ergonomics of AR experiences by modifying the angle between the mobile device's camera and its display. Whereas current mobile device cameras point out vertically from the back cover, we modify the camera angle to be 0, 45 and 90 degrees. In addition, we also investigate the use of the smartwatch as an AR browser form factor. Key findings are, that whilst the current approximately see-through configuration provides the fastest task completion times, a camera offset angle of 45° provides reduced task load and was preferred by users. When comparing different form factors and screen sizes, the smartwatch format was found to be unsuitable for AR browsing use.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122903384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A longitudinal evaluation of the acceptability and impact of a diet diary app for older adults with age-related macular degeneration
Lilit Hakobyan, J. Lumsden, R. Shaw, D. O’Sullivan
Ongoing advances in technology are increasing the scope for enhancing and supporting older adults' daily living. The digital divide between older and younger adults raises concerns, however, about the suitability of technological solutions for older adults, especially those with impairments. Taking older adults with Age-Related Macular Degeneration (AMD) as a case study, we used user-centred and participatory design approaches to develop an assistive mobile app for self-monitoring food intake [12,13]. In this paper we report findings of a longitudinal field evaluation conducted to investigate how the app was received and adopted by older adults with AMD and how it affected their lives. Demonstrating the benefit of applying inclusive design methods to technology for older adults, our findings reveal how use of the app raises participants' awareness of their diet, facilitates self-monitoring, encourages positive dietary behaviour change, and supports learning.
{"title":"A longitudinal evaluation of the acceptability and impact of a diet diary app for older adults with age-related macular degeneration","authors":"Lilit Hakobyan, J. Lumsden, R. Shaw, D. O’Sullivan","doi":"10.1145/2935334.2935356","DOIUrl":"https://doi.org/10.1145/2935334.2935356","url":null,"abstract":"Ongoing advances in technology are increasing the scope for enhancing and supporting older adults' daily living. The digital divide between older and younger adults raises concerns, however, about the suitability of technological solutions for older adults, especially for those with impairments. Taking older adults with Age-Related Macular Degeneration (AMD) as a case study, we used user-centred and participatory design approaches to develop an assistive mobile app for self-monitoring their intake of food [12,13]. In this paper we report on findings of a longitudinal field evaluation of our app that was conducted to investigate how it was received and adopted by older adults with AMD and its impact on their lives. Demonstrating the benefit of applying inclusive design methods for technology for older adults, our findings reveal how the use of the app raises participants' awareness and facilitates self-monitoring of diet, encourages positive (diet) behaviour change, and encourages learning.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114133853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sender-intended functions of emojis in US messaging
H. Cramer, Paloma de Juan, Joel R. Tetreault
Emojis are an extremely common occurrence in mobile communications, but their meaning is open to interpretation. We investigate motivations for their usage in mobile messaging in the US. This study asked 228 participants about the last time they used one or more emojis in a conversational message, and collected that message along with a description of the emojis' intended meaning and function. We discuss functional distinctions between adding emotional or situational meaning, adjusting tone, making a message more engaging to the recipient, conversation management, and relationship maintenance. We also discuss lexical placement within messages, as well as social practices. We show that the social and linguistic functions of emojis are complex and varied, and that supporting emojis can facilitate important conversational functions.
DOI: 10.1145/2935334.2935370
Motion based remote camera control with mobile devices
Sabir Akhadov, M. Lancelle, J. Bazin, M. Gross
With current digital cameras and smartphones, taking photos and videos has never been easier. However, it is still difficult to take a photo of a brief action at exactly the right time, and editing captured videos, such as modifying the playback speed of parts of a video, remains a time-consuming task. In this work we investigate how the motion sensors embedded in mobile devices, such as smartphones, can facilitate camera control. In particular, we show two families of applications: automatic camera trigger control for jump photos and automatic playback speed control (video speed ramping) for action videos. Our approach uses joint devices: a remote camera takes a photo or a video of the scene and is controlled by the motion sensor of a mobile device, either during or after recording. This allows casual users to achieve visually appealing effects with little effort, even for self-portraits.
DOI: 10.1145/2935334.2935372
Investigating the effects of splitting detailed views in Overview+Detail interfaces
Houssem Saidi, M. Serrano, E. Dubois
While several techniques offer more than one detailed view in Overview+Detail (O+D) interfaces, the optimal number of detailed views has not been investigated. The answer is not trivial: a single detailed view offers a larger display size but only allows sequential exploration of the overview, whereas several detailed views reduce the size of each view but allow parallel exploration of the overview. In this paper we investigate the benefits of splitting the detailed view in O+D interfaces for working with very large graphs. We implemented an O+D interface in which the overview is displayed on a large screen while 1, 2 or 4 split views are displayed on a tactile tablet. We experimentally evaluated the effect of the number of split views according to the number of nodes to connect. Using 4 split views outperforms using 1 or 2 when working with more than 2 nodes.
DOI: 10.1145/2935334.2935341
Discovering activities in your city using transitory search
J. Paay, J. Kjeldskov, M. Skov, Per M. Nielsen, Jon M. Pearce
Discovering activities in the city around you can be difficult with traditional search engines unless you know what you are looking for. Searching for inspiration on things to do requires a more open-ended and explorative approach. We introduce transitory search as a dynamic way of uncovering information about activities in the city around you that allows the user to start from a vague idea of what they are interested in and iteratively modify their search using slider continuums to discover best-fit results. We present the design of a smartphone app exemplifying the idea of transitory search and give results from a lab evaluation and a 4-week field deployment involving 15 people in two different cities. Our findings indicate that transitory search on a mobile device both supports discovering activities in the city and, more interestingly, helps users reflect on and shape their preferences in situ. We also found that ambiguous slider continuums work well, as people happily form and refine individual interpretations of them.
{"title":"Discovering activities in your city using transitory search","authors":"J. Paay, J. Kjeldskov, M. Skov, Per M. Nielsen, Jon M. Pearce","doi":"10.1145/2935334.2935378","DOIUrl":"https://doi.org/10.1145/2935334.2935378","url":null,"abstract":"Discovering activities in the city around you can be difficult with traditional search engines unless you know what you are looking for. Searching for inspiration on things to do requires a more open-ended and explorative approach. We introduce transitory search as a dynamic way of uncovering information about activities in the city around you that allows the user to start from a vague idea of what they are interested in, and iteratively modify their search using slider continuums to discover best-fit results. We present the design of a smartphone app exemplifying the idea of transitory search and give results from a lab evaluation and a 4-week field deployment involving 15 people in two different cities. Our findings indicate that transitory search on a mobile device both supports discovering activities in the city and more interestingly helps users reflect on and shape their preferences in situ. We also found that ambiguous slider continuums work well as people happily form and refine individual interpretations of them.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124655407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bitey: an exploration of tooth click gestures for hands-free user interface control
Daniel Ashbrook, Carlos E. Tejada, Dhwanit Mehta, Anthony Jiminez, Goudam Muralitharam, S. Gajendra, R. Tallents
We present Bitey, a subtle, wearable device for enabling input via tooth clicks. Based on a bone-conduction microphone worn just above the ears, Bitey recognizes the click sounds from up to five different pairs of teeth, allowing fully hands-free interface control. We explore the space of tooth input and show that Bitey allows for a high degree of accuracy in distinguishing between different tooth clicks, with up to 94% accuracy under laboratory conditions for five different tooth pairs. Finally, we illustrate Bitey's potential through two demonstration applications: a list navigation and selection interface and a keyboard input method.
{"title":"Bitey: an exploration of tooth click gestures for hands-free user interface control","authors":"Daniel Ashbrook, Carlos E. Tejada, Dhwanit Mehta, Anthony Jiminez, Goudam Muralitharam, S. Gajendra, R. Tallents","doi":"10.1145/2935334.2935389","DOIUrl":"https://doi.org/10.1145/2935334.2935389","url":null,"abstract":"We present Bitey, a subtle, wearable device for enabling input via tooth clicks. Based on a bone-conduction microphone worn just above the ears, Bitey recognizes the click sounds from up to five different pairs of teeth, allowing fully hands-free interface control. We explore the space of tooth input and show that Bitey allows for a high degree of accuracy in distinguishing between different tooth clicks, with up to 94% accuracy under laboratory conditions for five different tooth pairs. Finally, we illustrate Bitey's potential through two demonstration applications: a list navigation and selection interface and a keyboard input method.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121123117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Natural group binding and cross-display object movement methods for wearable devices
T. Jokela, Parisa Pour Rezaei, Kaisa Väänänen
As wearable devices become more popular, situations in which multiple co-located people wear such devices will become commonplace. In these situations, wearable devices could support collaborative tasks and experiences between co-located persons through multi-user applications. We present an elicitation study that gathers interaction methods for wearable devices from end users for two common tasks in co-located interaction: group binding and cross-display object movement. We report a total of 154 methods collected from 30 participants. We categorize the methods based on the metaphor and modality of interaction, and discuss the strengths and weaknesses of each category based on qualitative and quantitative feedback from the participants.
DOI: 10.1145/2935334.2935346
ScrollingHome: bringing image-based indoor navigation to smartwatches
Dirk Wenig, A. Steenbergen, Johannes Schöning, Brent J. Hecht, R. Malaka
Providing pedestrian navigation instructions on small screens is a challenging task due to limited screen space. As image-based approaches have been shown to outperform map-based navigation on mobile devices, we propose bringing image-based navigation to smartwatches. We contribute a straightforward pipeline to easily create image-based indoor navigation instructions that allow users to navigate freely in indoor environments without any localization infrastructure and with minimal user input on the smartwatch. In a user study, we show that our approach outperforms the current state-of-the-art application in terms of task completion time, perceived task load and perceived usability. In addition, we found no indication that explicit directional instructions are needed for image-based navigation on small screens.
{"title":"ScrollingHome: bringing image-based indoor navigation to smartwatches","authors":"Dirk Wenig, A. Steenbergen, Johannes Schöning, Brent J. Hecht, R. Malaka","doi":"10.1145/2935334.2935373","DOIUrl":"https://doi.org/10.1145/2935334.2935373","url":null,"abstract":"Providing pedestrian navigation instructions on small screens is a challenging task due to limited screen space. As image-based approaches for navigation have been successfully proven to outperform map-based navigation on mobile devices, we propose to bring image-based navigation to smartwatches. We contribute a straightforward pipeline to easily create image-based indoor navigation instructions that allow users to freely navigate in indoor environments without any localization infrastructure and with minimal user input on the smartwatch. In a user study, we show that our approach outperforms the current state-of-the art application in terms of task completion time, perceived task load and perceived usability. In addition, we did not find an indication that there is a need to provide explicit directional instructions for image-based navigation on small screens.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125745320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}