Let's take photos together: exploring asymmetrical interaction abilities on mobile camera phones
Pradthana Jarusriboonchai, Thomas Olsson, S. Lyckvi, Kaisa Väänänen
DOI: 10.1145/2935334.2935385

Mobile phones have become common tools for photography. Although photos are social artifacts, mobile phones afford photo taking only as an individual activity. Photo taking that involves more than one photographer has been envisioned to create positive outcomes and experiences. We implemented this vision on mobile camera phones, exploring how it would influence photo-taking practices and experiences. We conducted a user study in which 22 participants (11 pairs) used a novel mobile photography method based on asymmetrical interaction abilities, comparing it with two traditional methods. We present the collaborative practices that emerged in the different photography methods and report user experience findings, particularly with regard to enforced collaboration in mobile photo taking. The results highlight the benefits of and positive experiences in collaborative photo taking. We discuss lessons learned and point out design implications for mobile collocated collaboration.
Virtual.Cultural.Collaboration: mobile phones, video technology, and cross-cultural learning
Vidya Sarangapani, Ahmed Kharrufa, Madeline Balaam, D. Leat, Peter C. Wright
DOI: 10.1145/2935334.2935354

Cross-cultural learning has gained increased interest and importance within school curricula in recent years. Schools are using technology to accumulate resources for cross-cultural learning, which have predominantly consisted of pre-prepared videos, documentaries, photos, and textual information available online. In this paper we describe how three migrant families, tasked with developing cross-cultural resources over the course of six weeks, engaged with video technology on mobile smartphones. The resources they developed were then used as a learning resource in a classroom, and feedback was gathered from the teacher. Our study establishes that mobile phones, particularly smartphones, are an accessible, evocative, and affordable avenue for developing cross-cultural resources while also building stronger parental engagement in schools. The study expands knowledge in research areas that seek to use video technology on mobile phones to build cross-cultural resources for learning and to strengthen both home-to-school and school-to-home communication.
Fix and slide: caret navigation with movable background
Kenji Suzuki, K. Okabe, R. Sakamoto, Daisuke Sakamoto
DOI: 10.1145/2935334.2935357

We present a concept of using a movable background to navigate a caret on small mobile devices. The standard approach to selecting text on mobile devices is to directly touch the location in the text that the user wants to select. This is problematic because the user's finger hides the area to be selected. Our concept instead uses a movable background to navigate the caret: the user places the caret by tapping on the screen and then moves the background by touching and dragging. The caret remains fixed on the screen while the user drags the background text underneath it. We compared our technique with the iPhone's default UI and found that, even though participants were using our technique for the first time, average task completion time was no different from the default UI, and even faster for the small font size; our technique also received a significantly higher usability score.
{"title":"Session details: Participation","authors":"K. Church","doi":"10.1145/3254093","DOIUrl":"https://doi.org/10.1145/3254093","url":null,"abstract":"","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123773743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual parameters impacting reaction times on smartwatches
Kent Lyons
DOI: 10.1145/2935334.2935344

As a new generation of smartwatches enters the market, one common use is displaying information such as notifications. While some content might warrant immediately interrupting the user, other information may be important to display yet less urgent. It would be useful to show such content on the watch without immediately drawing the user's attention away from their primary task. In this paper, we investigate how quickly three visual parameters draw a user's attention. In particular, we present data from a smartwatch user study in which we examine the size, frequency, and color of a visual prompt and their impact on reaction time. We find statistically significant differences for size and frequency, where smaller prompts and slower frequencies result in less immediate reactions. We also present reaction time distributions that designers can use to tailor expected notification response times to their content.
Software-reduced touchscreen latency
N. Henze, Markus Funk, Alireza Sahami Shirazi
DOI: 10.1145/2935334.2935381

Devices with touchscreens have an inherent latency. When a user's finger drags an object across the screen, the object follows with a latency of around 100 ms on current devices. Previous work showed that latencies down to 25 ms reduce users' performance and that even 10 ms of latency is noticeable. In this paper we demonstrate an approach that reduces latency using a predictive model: by extrapolating the finger's movement, we predict where the finger will be in the next moment. Comparing different prediction approaches across three tasks, we show that prediction using neural networks is more precise than linear and polynomial extrapolation. Furthermore, we show through a Fitts' law dragging experiment that reducing touch latency can significantly increase users' performance. As the approach is software-based, it can easily be integrated into existing mobile applications and systems.
WatchMI: pressure touch, twist and pan gesture input on unmodified smartwatches
H. Yeo, Juyoung Lee, Andrea Bianchi, A. Quigley
DOI: 10.1145/2935334.2935375

The screen size of a smartwatch provides limited space for expressive multi-touch input, resulting in a markedly difficult and limited experience. We present WatchMI (Watch Movement Input), which enhances touch interaction on a smartwatch to support continuous pressure touch, twist, and pan gestures and their combinations. Our novel approach relies on software that analyzes, in real time, the data from a built-in Inertial Measurement Unit (IMU) in order to determine, with great accuracy and at different levels of granularity, the actions performed by the user, without requiring additional hardware or modification of the watch. We report the results of an evaluation of the system and demonstrate that the three proposed input interfaces are accurate, noise-resistant, easy to use, and deployable on a variety of smartwatches. We then showcase the potential of this work with seven different applications, including map navigation, an alarm clock, a music player, pan gesture recognition, text entry, a file explorer, and control of remote devices or a game character.
{"title":"Session details: Wearables","authors":"M. Serrano","doi":"10.1145/3254089","DOIUrl":"https://doi.org/10.1145/3254089","url":null,"abstract":"","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122521464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NaviLight: investigating ambient light displays for turn-by-turn navigation in cars
A. Matviienko, Andreas Löcken, Abdallah El Ali, Wilko Heuten, Susanne CJ Boll
DOI: 10.1145/2935334.2935359

Car navigation systems typically combine multiple output modalities; for example, GPS-based navigation aids show a real-time map or give spoken prompts indicating upcoming maneuvers. However, the drawback of graphical navigation displays is that drivers have to explicitly glance at them, which can distract them from the situation on the road. To decrease driver distraction while driving with a navigation system, we explore the use of ambient light as an in-car navigation aid, shifting navigation cues to the periphery of human attention. We investigated this in driving simulator studies, where we found that drivers spent significantly less time glancing at the ambient light navigation aid than at a GUI navigation display. Moreover, ambient light-based navigation was perceived as easy to use and understand, and was preferred over traditional GUI navigation displays. We discuss the implications of these outcomes for automotive personal navigation devices.
WAVI: improving motion capture calibration using haptic and visual feedback
Zlatko Franjcic, Paweł W. Woźniak, Gabriele Kasparaviciute, M. Fjeld
DOI: 10.1145/2935334.2935374

Motion tracking systems are gaining popularity and have a number of applications in research, entertainment, and the arts. These systems must be calibrated before use, a process that requires extensive user effort to determine a 3D coordinate system with acceptable accuracy. Usually, this is achieved by rapidly manipulating a calibration device (e.g. a calibration wand) throughout a volume for a set amount of time. While this is a complex spatial input task, improving the user experience of calibration has inspired little research. This paper presents the design, implementation, and evaluation of WAVI, a prototype device mounted on a calibration wand that jointly provides visual and tactile feedback during the calibration process. A user study showed that the device significantly increases calibration quality without increasing user effort. Based on our experiences with WAVI, we present new insights for improving motion tracking calibration and complex spatial input.