Smartphones, apps and older people's interests: from a generational perspective
A. Rosales, M. Fernández-Ardèvol
DOI: 10.1145/2935334.2935363

It is well documented that ICTs are designed mostly with young users in mind. In addition, most studies of smartphone use do not include older people or even consider age differences. Consequently, little is known about how to design smartphone apps that take older people's interests into account. We used a mixed-method approach with an intergenerational perspective to study this topic. First, we tracked the smartphone activities of 238 panelists. Second, we conducted an online survey (382 respondents). Third, we documented the experiences of a group of older people in a smartphone learning club. We found specific media consumption and communication patterns among older individuals: for example, at home they are more prone to jumping between devices for ergonomic reasons, so cross-device interactions are key for this group. We discuss the relevance of intergenerational studies in counterbalancing the spread of age stereotypes and in identifying alternative adoption trends.

Digital vibrons: understanding users' perceptions of interacting with invisible, zero-weight matter
Radu-Daniel Vatavu, Annette Mossel, Christian Schönauer
DOI: 10.1145/2935334.2935364

In this work, we investigate users' perceptions of interacting with invisible, zero-weight digital matter in smart mobile scenarios. To this end, we introduce the concept of a digital vibron: the vibrational manifestation of a digital object located outside its container device. We exemplify gesture-based interactions for digital vibrons and show how thinking about interactions in terms of digital vibrons can lead to new interactive experiences in the physical-digital space. We present the results of a user study that showed high scores for users' perceived experience, usability, and desirability, and we discuss users' preferences for vibration patterns to inform the design of vibrotactile feedback for digital vibrons. We hope this work will inspire researchers and practitioners to further explore digital vibrons and to design localized vibrotactile feedback for digital objects outside their smart devices, toward new interactive experiences in the physical-digital space.

Where are we?: evaluating the current rendering fidelity of mobile audio augmented reality systems
Florian Heller, Jayan Jevanesan, P. Dietrich, Jan O. Borchers
DOI: 10.1145/2935334.2935365

Mobile audio augmented reality systems (MAARS) simulate virtual audio sources in a physical space via headphones. While such systems required expensive sensing and rendering equipment 20 years ago, the necessary technology has since become widely available. Smartphones are now capable of running high-fidelity spatial audio rendering algorithms, and modern sensors can provide rich data to the rendering process. Combined, these constitute an inexpensive, powerful platform for audio augmented reality. We evaluated the practical limitations of currently available off-the-shelf hardware using a voice sample in a lab experiment. State-of-the-art motion sensors provide multiple degrees of freedom, delivering pitch and roll angles instead of yaw only. Since our rendering algorithm can also incorporate this richer sensor data in the form of source elevation, we also measured its impact on sound localization. Results show that mobile audio augmented reality systems achieve the same horizontal resolution as stationary systems. We found that including pitch and roll angles did not significantly improve users' localization performance.
{"title":"Session details: Sociability","authors":"Cosmin Munteanu","doi":"10.1145/3254097","DOIUrl":"https://doi.org/10.1145/3254097","url":null,"abstract":"","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125775545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Acceptance of mobile technology by older adults: a preliminary study
Sunyoung Kim, Krzysztof Z Gajos, Michael J. Muller, B. Grosz
DOI: 10.1145/2935334.2935380

Mobile technologies offer the potential for enhanced healthcare, especially by supporting self-management of chronic care. For these technologies to impact chronic care, they need to work for older adults, because the majority of people with chronic conditions are older. A major challenge remains: integrating the appropriate use of such technologies into the lives of older adults. We investigated how older adults accept mobile technologies by interviewing two groups of older adults (technology adopters and non-adopters, all aged 60+) about their experiences with and perspectives on mobile technologies. Our preliminary results indicate that there is an additional phase, the intention to learn, and three related factors, self-efficacy, conversion readiness, and peer support, that significantly influenced the acceptance of mobile technologies among the participants but are not represented in existing models. Based on these findings, we propose a tentative theoretical model that extends existing theories to explain the ways in which our participants came to accept mobile technologies. Future work should investigate the validity of the proposed model by testing our findings with younger people.

Let your body move: a prototyping toolkit for wearable force feedback with electrical muscle stimulation
Max Pfeiffer, Tim Duente, M. Rohs
DOI: 10.1145/2935334.2935348

Electrical muscle stimulation (EMS) is a promising wearable haptic output technology, as it can be miniaturized considerably and delivers a wide range of haptic output. However, prototyping EMS applications is challenging: it requires detailed knowledge and skills regarding hardware, software, and physiological characteristics. To simplify prototyping with EMS in mobile and wearable settings, we present the Let Your Body Move toolkit. It consists of (1) a hardware control module with Bluetooth communication that uses off-the-shelf EMS devices as signal generators, (2) a simple communication protocol to connect mobile devices, and (3) a set of control applications as starting points for EMS prototyping. We describe EMS-specific parameters, electrode placements on the skin, and user calibration. The toolkit was evaluated in a workshop with 10 haptics researchers. The results show that the toolkit allows non-trivial prototypes to be generated quickly. The hardware schematics and software components are available as open source.

EmojiZoom: emoji entry via large overview maps 😄🔍
Henning Pohl, D. Stanke, M. Rohs
DOI: 10.1145/2935334.2935382

Current soft keyboards for emoji entry all present emoji in the same way: in long lists, spread over several categories. While categories limit the number of emoji in each individual list, the overall number is still so large that emoji entry is a challenging task. The task takes particularly long if users pick the wrong category when searching for an emoji. Instead, we propose a new zooming keyboard for emoji entry. Here, users can see all emoji at once, which aids in building spatial memory of where related emoji are to be found. We compared our zooming emoji keyboard against the Google keyboard and found that our keyboard allows for 18% faster emoji entry, reducing the required time for one emoji from 15.6 s to 12.7 s. A preliminary longitudinal evaluation with three participants showed that emoji entry time improved by up to 60% over the duration of the study, to a final average of 7.5 s.

Exploring tilt for no-touch, wrist-only interactions on smartwatches
Anhong Guo, Tim Paek
DOI: 10.1145/2935334.2935345

Because smartwatches are worn on the wrist, they do not require users to hold the device, leaving at least one hand free to engage in other activities. Unfortunately, this benefit is thwarted by the typical interaction model of smartwatches: for interactions beyond glancing at information or using speech, users must use their other hand to manipulate a touchscreen and/or hardware buttons. To enable no-touch, wrist-only smartwatch interactions, so that users can, for example, hold a cup of coffee while controlling their device, we explore two tilt-based interaction techniques for menu selection and navigation: AnglePoint, which directly maps the position of a virtual pointer to the tilt angle of the smartwatch, and ObjectPoint, which objectifies the underlying virtual pointer as an object imbued with a physics model. In a user study, we found that participants performed menu selection, continuous selection of menu items, and navigation through a menu hierarchy more quickly and accurately with ObjectPoint, even though previous research on tilt for other mobile devices suggested that AnglePoint would be more effective. We provide an explanation of our results and discuss the implications for more "hands-free" smartwatch interactions.

ScatterWatch: subtle notifications via indirect illumination scattered in the skin
Henning Pohl, Justyna Medrek, M. Rohs
DOI: 10.1145/2935334.2935351

With the increasing popularity of smartwatches over the last years, there has been substantial interest in novel input methods for such small devices. However, feedback modalities for smartwatches have not seen the same level of interest. This is surprising, as one of the primary functions of smartwatches is delivering notifications. It is the interrupting nature of current smartwatch notifications that has drawn some of the more critical responses to them. Here, we present a subtle notification mechanism for smartwatches that uses light scattering in the wearer's skin as a feedback modality. This does not disrupt the wearer in the same way as vibration feedback and also connects more naturally with the user's body.

Bringing mobile into meetings: enhancing distributed meeting participation on smartwatches and mobile phones
S. Carter, Jennifer Marlow, A. Komori, Ville Mäkelä
DOI: 10.1145/2935334.2935355

Most teleconferencing tools treat users in distributed meetings monolithically: all participants are meant to be interconnected in more or less the same manner. In practice, people connect to meetings in different contexts: sometimes sitting in front of a laptop or tablet giving their full attention, but at other times mobile and concurrently involved in other tasks, or as a liminal participant in a larger group meeting. In this paper, we present the design and evaluation of two applications, MixMeetWear and MixMeetMate, that help users in non-standard contexts participate flexibly in meetings.