"User-centered development of a system to support assembly line worker"
Boban Blazevski, Jean D. Hallewell Haslwanter
Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2017), 2017-09-04. DOI: 10.1145/3098279.3119840

This industrial perspective contribution describes the development of a prototype mobile worker-assistance system for a motor assembly line that is transitioning to single-piece flow. A broad range of user-centered methods was chosen to cover all phases of development, and users were included in the analysis and evaluation phases. The results show that these methods are suitable for this type of development. Extended user tests indicate that the prototype was well accepted, especially by people with less experience on the assembly line. Because the system was tested in a real environment for five days with a number of users, the company is now confident about investing in its further development.
"Towards pressure-based feedback for non-stressful tactile notifications"
R. Kettner, Patrick Bader, T. Kosch, Stefan Schneegass, A. Schmidt
DOI: 10.1145/3098279.3122132

Smartphones, wearables, and other mobile devices often use tactile feedback to notify users. This feedback type has proved beneficial because it occupies neither the visual nor the auditory channel. It can, however, still be distracting in some situations, such as when users are already stressed. To investigate tactile feedback patterns that do not increase the user's stress level, we developed two wrist-worn prototypes providing vibrotactile and pressure-based feedback, respectively, and conducted a user study with 14 participants comparing the two feedback types. The results suggest that vibrotactile feedback increases the user's stress level more than pressure-based feedback, particularly when it is applied while the user's current stress level is low. Consequently, we present implications for designing notifications for mobile and wearable devices.
"βTap: back-of-device tap input with built-in sensors"
Emilio Granell, Luis A. Leiva
DOI: 10.1145/3098279.3125440

We present βTap, back-of-device (BoD) tap-detection software for mobile devices that uses commodity sensors, without the need to instrument the device. Although only basic interactions are supported (namely single and double taps), βTap is highly accurate and performance-friendly, since it uses a low-cost yet highly discriminative set of features. Our software is publicly available on the Google Play Store so that others can build upon our work.
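The βTap abstract does not disclose its feature set, but the general idea of detecting back-of-device taps from commodity motion sensors can be sketched as follows: a tap shows up as a brief spike of the accelerometer magnitude above a slowly adapting baseline, and two spikes within a short window form a double tap. All thresholds, the filter, and the double-tap window below are illustrative assumptions, not values from the paper.

```python
def detect_taps(accel_magnitudes, sample_rate_hz=100,
                spike_threshold=1.5, double_tap_window_s=0.4,
                refractory_s=0.05):
    """Detect single/double taps from accelerometer magnitude samples.

    Illustrative sketch, not the βTap algorithm: a tap is a brief
    spike above a slowly adapting baseline (gravity); two spikes
    within `double_tap_window_s` are merged into a double tap.
    """
    events = []                 # list of ("single" | "double", time_s)
    last_tap_time = None        # time of a pending single tap
    last_spike_time = -1.0      # enforces the refractory period
    baseline = accel_magnitudes[0]
    alpha = 0.9                 # smoothing factor for the baseline
    for i, m in enumerate(accel_magnitudes):
        t = i / sample_rate_hz
        high_freq = m - baseline            # crude high-pass filter
        baseline = alpha * baseline + (1 - alpha) * m
        if high_freq > spike_threshold and t - last_spike_time > refractory_s:
            last_spike_time = t
            if last_tap_time is not None and t - last_tap_time <= double_tap_window_s:
                events[-1] = ("double", last_tap_time)  # upgrade pending single
                last_tap_time = None
            else:
                events.append(("single", t))
                last_tap_time = t
    return events
```

A real implementation would tune these parameters per device and add features to reject false positives from walking or handling, which is presumably where the paper's "highly discriminative set of features" comes in.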
"DroneCAST: towards a programming toolkit for airborne multimedia display applications"
Ragavendra Lingamaneni, Thomas Kubitza, J. Scheible
DOI: 10.1145/3098279.3122128

In recent years, a new type of public display has captured the interest of researchers: displays that can move freely in three-dimensional space. We refer to such systems as Airborne Multimedia Display (AMD) systems. In this paper, we provide a comprehensive analysis of the requirements for developing interactive AMD applications, based on an extensive survey of related work and on our own experience flying AMD systems. We then outline the design and implementation of DroneCAST, a programming toolkit for developing AMD applications with a remote delivery and control mechanism for multimedia content, with IoT middleware as the core of the system. Finally, we build a sample AMD application with the toolkit to further illustrate the system's applicability.
"Activity recognition for movement-based interaction in mobile games"
Alexandre Almeida, Ana Alves
DOI: 10.1145/3098279.3125443

Although smartphones include a set of sensors that enable innovative interactions, current mobile game interaction is mostly touch-based. Some games also include tilt control based on the accelerometer. However, sensors such as accelerometers and gyroscopes can be used to recognize full-body motions in real time, which can lead to innovative and immersive experiences while promoting physical activity. We present a proof-of-concept 3D endless-running game called ActivRunner that implements an activity recognition system predicting, in real time, five activities: standing, moving left, moving right, squatting, and jumping. The goal is to replace traditional touch interaction with a more natural movement-based one, showing the potential of this kind of interaction to create innovative and immersive mobile experiences while promoting physical activity.
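Activity recognition systems of the kind the ActivRunner abstract describes typically slide a window over the raw accelerometer stream and feed simple per-axis statistics to a classifier. The feature choice below (mean, standard deviation, min, max per axis) is a common illustrative baseline, not the feature set from the paper.

```python
import math

def window_features(window):
    """Per-axis statistics over a sliding window of (x, y, z)
    accelerometer samples, suitable as input to a classifier for
    activities such as standing, moving left/right, squatting, and
    jumping. Illustrative baseline features, not from the paper.
    """
    features = []
    for axis in range(3):
        values = [sample[axis] for sample in window]
        mean = sum(values) / len(values)
        variance = sum((v - mean) ** 2 for v in values) / len(values)
        # mean, standard deviation, min, max for this axis
        features.extend([mean, math.sqrt(variance), min(values), max(values)])
    return features
```

Each window's 12-element feature vector would then go to any multi-class classifier; for real-time use, windows of roughly 0.5-2 seconds with 50% overlap are a common starting point.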
"FrontFace: facilitating communication between HMD users and outsiders using front-facing-screen HMDs"
Liwei Chan, K. Minamizawa
DOI: 10.1145/3098279.3098548

A head-mounted display (HMD) immerses users in a virtual world but separates them from outsiders in the real world. We present FrontFace, a novel HMD that combines an eye-tracker with a front-facing screen to lower the communication barrier between HMD users and outsiders. The front-facing screen reveals the user's attention (e.g., the user's eye motions) and presence in the virtual or real world by displaying either the virtual-world scene or a skin background, enabling eye-contact interactions between the HMD user and outsiders. FrontFace has two benefits: first, it communicates the HMD user's presence to outsiders; second, it reveals the user's visual attention by showing the otherwise occluded eye motions, enabling outsiders to make sense of the HMD user's reactions in the virtual or real world. We propose three interaction techniques for outsiders to initiate communication with HMD users: tap-trigger, hand-gesture-trigger, and voice-trigger interactions. A small focus group provided feedback.
"Impact of location-based games on phone usage and movement: a case study on Pokémon GO"
I. Andone, Konrad Blaszkiewicz, Matthias Böhmer, Alexander Markowetz
DOI: 10.1145/3098279.3122145

Pokémon GO was a short-lived mobile location-based gaming phenomenon. After its launch in July 2016, it quickly reached 500 million installs, but interest subsequently faded. As part of a large-scale "in the wild" mobile phone study, we recorded phone usage and location measurements between June and September 2016. We investigate who installed and played Pokémon GO and what effects it had on their behaviour. Taking the start of playing as the midpoint, we selected users who had activity for at least two weeks before and two weeks after it. In this work we present our findings on a sample of 2,861 users, comparing their demographic characteristics and Big Five personality traits with those of 7,904 non-playing users from the same period. Players' general daily phone usage increased by 27 minutes on average, a 16% rise per day. Their large-scale movement patterns, measured by diameter and total path length per day, did not change.
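The two movement measures the Pokémon GO study reports, daily total path length and diameter, can be computed straightforwardly from a day's location samples. The sketch below uses planar (x, y) coordinates in metres for clarity; a real pipeline on GPS data would use latitude/longitude with haversine distances.

```python
import math

def movement_metrics(points):
    """Daily movement measures from a sequence of (x, y) positions
    in metres: total path length (sum of consecutive hops) and
    diameter (largest pairwise distance). Planar coordinates are an
    illustrative simplification of geographic data.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    path_length = sum(dist(a, b) for a, b in zip(points, points[1:]))
    diameter = max((dist(a, b) for a in points for b in points), default=0.0)
    return path_length, diameter
```

Note that the two measures capture different things: a commuter pacing the same block all day has a large path length but a small diameter, which is why the study reports both.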
"EMS in HCI: challenges and opportunities in actuating human bodies"
Tim Duente, Stefan Schneegass, Max Pfeiffer
DOI: 10.1145/3098279.3119920

Electrical Muscle Stimulation (EMS) has recently received considerable attention in the HCI community. By applying small electrical signals to the user's body, different types of movement can be generated. These movements allow designers to create more meaningful and embodied haptic feedback than vibrotactile feedback. This advantage comes with further technical and practical challenges that need to be tackled, including a fine-grained calibration procedure and close contact with the user's body at specific on-body locations. This tutorial gives an overview of current research projects, challenges, and opportunities in using EMS to provide rich embodied feedback, followed by hands-on experience. The main goal is for participants to gain a basic understanding of how EMS works and how systems using EMS can be developed and evaluated.
"User-independent real-time hand gesture recognition based on surface electromyography"
Frederic Kerber, M. Puhl, A. Krüger
DOI: 10.1145/3098279.3098553

In this paper, we present a novel real-time hand gesture recognition system based on surface electromyography. We employ a user-independent approach based on a support vector machine that uses ten features extracted from the raw electromyographic data obtained from the Myo armband by Thalmic Labs. Through an improved synchronization approach, we simplified the process of putting on the sensing armband. We report the results of a user study with 14 participants using an extended set of 40 gestures. On the set of five hand gestures currently supported off-the-shelf by the Myo armband, we outperform the original algorithm with an overall accuracy of 95% versus 68% on the same dataset.
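The abstract mentions ten features extracted from raw EMG without naming them. Time-domain features commonly used in surface-EMG classification include mean absolute value (MAV), root mean square (RMS), zero crossings (ZC), and waveform length (WL); the subset below is illustrative and not necessarily the paper's feature set.

```python
import math

def emg_features(channel):
    """Four standard time-domain features of one raw surface-EMG
    channel: MAV, RMS, zero crossings, and waveform length.
    Illustrative subset; the paper itself uses ten features.
    """
    n = len(channel)
    mav = sum(abs(v) for v in channel) / n          # mean absolute value
    rms = math.sqrt(sum(v * v for v in channel) / n)  # root mean square
    zc = sum(1 for a, b in zip(channel, channel[1:]) if a * b < 0)  # sign flips
    wl = sum(abs(b - a) for a, b in zip(channel, channel[1:]))  # waveform length
    return [mav, rms, zc, wl]
```

In a setup like the one described, one such feature vector per electrode (the Myo armband has eight) would be concatenated over a short window and fed to a multi-class SVM, e.g. scikit-learn's `svm.SVC`.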
"Finger tracking: facilitating non-commercial content production for mobile e-reading applications"
Carrie Demmans Epp, Cosmin Munteanu, Benett Axtell, Keerthika Ravinthiran, Yomna Aly, Elman Mansimov
DOI: 10.1145/3098279.3098556

Limited literacy and visual impairment reduce the ability of many people to read on their own. Current e-reader solutions rely on either unnatural synthetic voices or professionally produced audio e-books. Neither provides the same enjoyment as having a family member read to the user, especially when the user requires assistive reading (following printed text while listening to it being read). Unfortunately, support for non-commercial production of such e-books is limited and requires significant effort. We evaluate a novel assistive mobile interaction technique that facilitates the recording of audio e-books and their synchronization with the read text. We show that a technique based on a finger-tracking metaphor provides optimal support with respect to reading speed. These human-in-the-loop, adaptive techniques can now be used to reduce the content-creation burden associated with supporting those who cannot read on their own.