Gesture Morpher: video-based retargeting of multi-touch interactions
Ramik Sadana, Y. Li
DOI: 10.1145/2935334.2935391

We present Gesture Morpher, a tool for prototyping and testing multi-touch interactions based on video recordings of target application behaviors, e.g., a sequence of screenshots recorded by a screen-capture tool. Gesture Morpher extracts continuous behaviors, such as transformations of UI content, from the video recordings and suggests a set of multi-touch interactions suitable for achieving these behaviors. Designers can easily test different interactions on a touch device, with visual responses synthesized automatically from the video recording, all without any programming. We discuss the range of multi-touch interaction scenarios Gesture Morpher supports, our method for extracting continuous interaction behaviors from video recordings, and techniques for associating touch input with the output effects extracted from the videos.
Playing on AREEF: evaluation of an underwater augmented reality game for kids
L. Oppermann, Lisa Blum, Marius Shekow
DOI: 10.1145/2935334.2935368

This paper reports on a study of AREEF, a multi-player Underwater Augmented Reality (UWAR) experience for swimming pools. Using off-the-shelf components combined with a custom-made waterproof case and an innovative game concept, AREEF puts computer-game technology to use for recreational and educational purposes in and under water. After an overview of the experience, we present evidence gained from a user-centred design process, including a pilot study with three kids and a final evaluation with 36 kids. Our discussion covers technical findings regarding marker placement, tracking, and device handling, as well as design-related issues such as virtual object placement and the need for extremely obvious user interaction and feedback when staging a mobile underwater experience.
Understanding call logs of smartphone users for making future calls
Mehwish Nasim, A. Rextin, Numair Khan, Muhammad Muddasir Malik
DOI: 10.1145/2935334.2935350

In this measurement study, we analyze whether mobile phone users exhibit temporal regularity in their mobile communication. To this end, we collected a mobile phone usage dataset from a developing country, Pakistan. The data consist of 783 users and 229,450 communication events. We found a number of interesting patterns, both at the aggregate level and at the dyadic level. Some notable results: the number of calls to different alters consistently follows the rank-size rule; a communication event between an ego-alter (user-contact) pair greatly increases the chances of another communication event; certain ego-alter pairs tend to communicate more over weekends; and ego-alter pairs exhibit autocorrelation at various time quanta. Identifying such idiosyncrasies in ego-alter communication can help improve the calling experience of smartphone users by automatically (smartly) sorting the call log without any manual intervention.
In situ CAD capture
Aditya Sankar, S. Seitz
DOI: 10.1145/2935334.2935337

We present an interactive system for capturing CAD-like 3D models of indoor scenes on a mobile device. To overcome the sensory and computational limitations of the mobile platform, we employ an in situ, semi-automated approach that harnesses the user's high-level knowledge of the scene to assist the reconstruction and modeling algorithms. Modeling proceeds in two stages: (1) the user captures the 3D shape and dimensions of the room; (2) the user then uses voice commands and an augmented-reality sketching interface to insert objects of interest, such as furniture, artwork, doors, and windows. Our system recognizes the sketches and adds a corresponding 3D model to the scene at the appropriate location. The key contributions of this work are the design of a multi-modal user interface that effectively captures the user's semantic understanding of the scene, and the underlying algorithms that process the input to produce useful reconstructions.
{"title":"Session details: Games & learning","authors":"C. Santoro","doi":"10.1145/3254092","DOIUrl":"https://doi.org/10.1145/3254092","url":null,"abstract":"","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123285634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Input techniques","authors":"A. Lucero","doi":"10.1145/3254088","DOIUrl":"https://doi.org/10.1145/3254088","url":null,"abstract":"","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115724265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Supporting visual impairment","authors":"F. Paternò","doi":"10.1145/3254086","DOIUrl":"https://doi.org/10.1145/3254086","url":null,"abstract":"","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124142642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time to exercise!: an aide-memoire stroke app for post-stroke arm rehabilitation
Nicholas Micallef, L. Baillie, Stephen Uzor
DOI: 10.1145/2935334.2935338

A majority of stroke survivors (up to 80%) have an arm impairment that persists over the long term (>12 months). Physiotherapy experts believe that a rehabilitation aide-memoire could help these patients [25]. Hence, with input from physiotherapists, stroke experts, and former stroke patients, we designed the Aide-Memoire Stroke (AIMS) app to help survivors remember to exercise more frequently. We evaluated its use in a controlled field evaluation on a smartphone, a tablet, and a smartwatch. Since one of the app's main features is reminding stroke survivors to exercise, we also investigated reminder modalities (i.e., visual, vibration, audio, and speech). One key finding is that stroke survivors opted for a combination of modalities to remind them to conduct their exercises. Stroke survivors also seemed to prefer smartphones over the other mobile devices because of their ease of use, familiarity, and ease of handling with one arm.
NavCog: a navigational cognitive assistant for the blind
D. Ahmetovic, Cole Gleason, Chengxiong Ruan, Kris M. Kitani, Hironobu Takagi, C. Asakawa
DOI: 10.1145/2935334.2935361

Turn-by-turn navigation is a useful paradigm for assisting people with visual impairments during mobility, as it reduces the cognitive load of having to simultaneously sense, localize, and plan. To realize such a system, it is necessary to localize the user automatically with sufficient accuracy, to provide timely and efficient instructions, and to be able to deploy the system easily to new spaces. We propose a smartphone-based system that provides turn-by-turn navigation assistance based on accurate real-time localization over large spaces. In addition to basic navigation capabilities, our system also informs the user about nearby points of interest (POIs) and accessibility issues (e.g., stairs ahead). After deploying the system on a university campus across several indoor and outdoor areas, we evaluated it with six blind subjects and showed that it is capable of guiding visually impaired users through complex and unfamiliar environments.
What can I say?: addressing user experience challenges of a mobile voice user interface for accessibility
E. Corbett, Astrid Weber
DOI: 10.1145/2935334.2935386

Voice interactions on mobile phones are most often used to augment or supplement touch-based interactions for users' convenience. However, for people with limited hand dexterity caused by various forms of motor impairment, voice interactions can have a significant impact, and in some cases even enable independent interaction with a mobile device for the first time. For these users, a Mobile Voice User Interface (M-VUI) that allows completely hands-free, voice-only interaction would provide a high level of accessibility and independence. Implementing such a system requires research to address long-standing usability challenges of voice interactions, which degrade the user experience because voice commands are difficult to learn and discover. In this paper, we address these concerns, reporting on research conducted to improve the visibility and learnability of voice commands in an M-VUI application being developed on the Android platform. Our research confirmed long-standing challenges with voice interactions while exploring several methods for improving the onboarding and learning experience. Based on our findings, we offer a set of implications for the design of M-VUIs.