{"title":"Invisiboard: maximizing display and input space with a full screen text entry method for smartwatches","authors":"Aske Mottelson, Christoffer Larsen, Mikkel Lyderik, Paul Strohmeier, Jarrod Knibbe","doi":"10.1145/2935334.2935360","DOIUrl":"https://doi.org/10.1145/2935334.2935360","url":null,"abstract":"The small displays of smartwatches make text entry difficult and time consuming. While text entry rates can be increased, this continues to occur at the expense of available screen display space. Soft keyboards can easily use half the display space of tiny-screened devices. To combat this problem, we present Invisiboard: an invisible text entry method using the entire display for both text entry and display simultaneously. Invisiboard combines a numberpad-like layout with swipe gestures. This maximizes input target size, provides a familiar layout, and preserves display space. Through this, Invisiboard achieves entry rates comparable to or even faster than an existing research baseline. A user study with 12 participants writing 3264 words revealed an entry rate of 10.6 Words Per Minute (WPM) after 30 minutes, 7% faster than ZoomBoard. Furthermore, with nominal training, some participants demonstrated entry rates of over 30 WPM.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132494799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Playing on AREEF: evaluation of an underwater augmented reality game for kids","authors":"L. Oppermann, Lisa Blum, Marius Shekow","doi":"10.1145/2935334.2935368","DOIUrl":"https://doi.org/10.1145/2935334.2935368","url":null,"abstract":"This paper reports on a study of AREEF, a multi-player Underwater Augmented Reality (UWAR) experience for swimming pools. Using off-the-shelf components combined with a custom-made waterproof case and an innovative game concept, AREEF puts computer game technology to use for recreational and educational purposes in and under water. After an experience overview, we present evidence gained from a user-centred design process, including a pilot study with 3 kids and a final evaluation with 36 kids. Our discussion covers technical findings regarding marker placement, tracking, and device handling, as well as design-related issues such as virtual object placement and the need for extremely obvious user interaction and feedback when staging a mobile underwater experience.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127731311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"In situ CAD capture","authors":"Aditya Sankar, S. Seitz","doi":"10.1145/2935334.2935337","DOIUrl":"https://doi.org/10.1145/2935334.2935337","url":null,"abstract":"We present an interactive system to capture CAD-like 3D models of indoor scenes on a mobile device. To overcome sensory and computational limitations of the mobile platform, we employ an in situ, semi-automated approach and harness the user's high-level knowledge of the scene to assist the reconstruction and modeling algorithms. The modeling proceeds in two stages: (1) The user captures the 3D shape and dimensions of the room. (2) The user then uses voice commands and an augmented reality sketching interface to insert objects of interest, such as furniture, artwork, doors and windows. Our system recognizes the sketches and adds a corresponding 3D model into the scene at the appropriate location. The key contributions of this work are the design of a multi-modal user interface to effectively capture the user's semantic understanding of the scene and the underlying algorithms that process the input to produce useful reconstructions.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131342764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding call logs of smartphone users for making future calls","authors":"Mehwish Nasim, A. Rextin, Numair Khan, Muhammad Muddasir Malik","doi":"10.1145/2935334.2935350","DOIUrl":"https://doi.org/10.1145/2935334.2935350","url":null,"abstract":"In this measurement study, we analyze whether mobile phone users exhibit temporal regularity in their mobile communication. To this end, we collected a mobile phone usage dataset from a developing country -- Pakistan. The data consists of 783 users and 229,450 communication events. We found a number of interesting patterns at both the aggregate and the dyadic level in the data. Some interesting results include: the number of calls to different alters consistently follows the rank-size rule; a communication event between an ego-alter (user-contact) pair greatly increases the chances of another communication event; certain ego-alter pairs tend to communicate more over weekends; ego-alter pairs exhibit autocorrelation across various time quanta. Identifying such idiosyncrasies in ego-alter communication can help improve the calling experience of smartphone users by automatically (smartly) sorting the call log without any manual intervention.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134183632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","authors":"F. Paternò, Kaisa Väänänen, K. Church, Jonna Häkkilä, A. Krüger, M. Serrano","doi":"10.1145/2935334","DOIUrl":"https://doi.org/10.1145/2935334","url":null,"abstract":"MobileHCI brings together people from diverse backgrounds and areas of expertise to provide a truly multidisciplinary forum. Academics, hardware and software developers, designers and practitioners alike can discuss challenges encountered on different frontiers of mobility, as well as potential solutions that will advance the field. The conference covers both academic and industry research, ranging from fundamental interaction models and techniques to social and cultural aspects of everyday life with mobile devices and services.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130006429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Games & learning","authors":"C. Santoro","doi":"10.1145/3254092","DOIUrl":"https://doi.org/10.1145/3254092","url":null,"abstract":"","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123285634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NavCog: a navigational cognitive assistant for the blind","authors":"D. Ahmetovic, Cole Gleason, Chengxiong Ruan, Kris M. Kitani, Hironobu Takagi, C. Asakawa","doi":"10.1145/2935334.2935361","DOIUrl":"https://doi.org/10.1145/2935334.2935361","url":null,"abstract":"Turn-by-turn navigation is a useful paradigm for assisting people with visual impairments during mobility as it reduces the cognitive load of having to simultaneously sense, localize and plan. To realize such a system, it is necessary to be able to automatically localize the user with sufficient accuracy, provide timely and efficient instructions and have the ability to easily deploy the system to new spaces. We propose a smartphone-based system that provides turn-by-turn navigation assistance based on accurate real-time localization over large spaces. In addition to basic navigation capabilities, our system also informs the user about nearby points-of-interest (POI) and accessibility issues (e.g., stairs ahead). After deploying the system on a university campus across several indoor and outdoor areas, we evaluated it with six blind subjects and showed that our system is capable of guiding visually impaired users in complex and unfamiliar environments.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120893281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Supporting visual impairment","authors":"F. Paternò","doi":"10.1145/3254086","DOIUrl":"https://doi.org/10.1145/3254086","url":null,"abstract":"","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124142642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Wrist and hand interaction II","authors":"Luis A. Leiva","doi":"10.1145/3254094","DOIUrl":"https://doi.org/10.1145/3254094","url":null,"abstract":"","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128223426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Time to exercise!: an aide-memoire stroke app for post-stroke arm rehabilitation","authors":"Nicholas Micallef, L. Baillie, Stephen Uzor","doi":"10.1145/2935334.2935338","DOIUrl":"https://doi.org/10.1145/2935334.2935338","url":null,"abstract":"A majority of Stroke survivors have an arm impairment (up to 80%), which persists over the long term (>12 months). Physiotherapy experts believe that a rehabilitation Aide-Memoire could help these patients [25]. Hence, we designed, with the input of physiotherapists, Stroke experts and former Stroke patients, the Aide-Memoire Stroke (AIMS) App to help them remember to exercise more frequently. We evaluated its use in a controlled field evaluation on a smartphone, tablet and smartwatch. Since one of the main features of the app is to remind Stroke survivors to exercise, we also investigated reminder modalities (i.e., visual, vibrate, audio, speech). One key finding is that Stroke survivors opted for a combination of modalities to remind them to conduct their exercises. Also, Stroke survivors seem to prefer smartphones over other mobile devices due to their ease of use, familiarity and being easier to handle with one arm.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115433629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}