Distance, time, and friends: system-generated cues and impression formation in mediated spaces
Colin Fitzpatrick
More people are turning to apps to connect with others nearby for a range of relational goals (e.g., dates, sex). These apps constrain profiles in certain ways while also supplementing them with additional system-generated cues. The proposed experiments investigate three such popular cues (distance, time, and number of friends) and how they affect the impression formation process in this context of varied relational goals.
DOI: https://doi.org/10.1145/2957265.2963112
UbiMaze: a new genre of virtual reality game based on mobile devices
Wei Gai, Chenglei Yang, Yulong Bian, Mingda Dong, Juan Liu, Yifan Dong, Chengjie Niu, Cheng Lin, Xiangxu Meng, Chia Shen
Most current virtual/augmented reality games focus on the player's immersive experience; the creative process of game design is usually not accessible to the player. In this paper, we present a new game genre that combines player-initiated game design with game play. This new genre enables users to design games in a physical space and then play them in a rendered virtual space. To this end, we illustrate our conceptual design of a virtual reality game, called UbiMaze, which promotes player participation and provides a rich, interactive and engaging experience.
DOI: https://doi.org/10.1145/2957265.2961824
A case study on capturing and visualising face-to-face interactions in the workplace
A. Mashhadi, Akhil Mathur, M. V. D. Broeck, G. Vanderhulst, M. Godon, F. Kawsar
Face-to-face interactions have been shown to accelerate the success of teams and larger organisations. Much past research has explored the benefits of quantifying face-to-face interactions for informed workplace management, but little attention has been paid to how this information is perceived by employees. In this paper, we offer a reflection on automated feedback about personal interactions in the workplace, through a longitudinal study that captured, modelled and visualised the face-to-face interactions of 47 employees over 4 months in an industrial research lab in Europe. We conducted semi-structured interviews with 20 employees to understand their perception of and experience with the system. Our findings suggest that short-term feedback on personal face-to-face interactions was not perceived by most as an effective external cue to promote self-reflection, and that employees desire long-term feedback annotated with actionable attributes.
DOI: https://doi.org/10.1145/2957265.2957272
Design space of the new materials for fashionable wearables
Jinyi Wang, O. Juhlin, Erika Blomgren, Linnea Bågander, Evelin Kägo, Florian Meier, Mariko Takahashi, Clemens Thornquist
This paper presents a design workshop that explores the future of fashionable wearable technology with a focus on aesthetics. The results of the workshop include four fashion design concepts and the implications that emerged from the discussion of each concept during the workshop. These implications open up a new design space of technologies and materials that account for aesthetics beyond traditional fabric, e.g., transparency, scale, irregularity, movement, contextual expressions and fashion intelligence.
DOI: https://doi.org/10.1145/2957265.2965023
Managing smartwatch notifications through filtering and ambient illumination
Frederic Kerber, Christoph Hirtz, Sven Gehring, Markus Löchtefeld, A. Krüger
The ongoing development of smart, wearable devices opens up a new range of possibilities for human-computer interaction. Recent research has confirmed that smartwatches are primarily used to visualize notifications. However, the limited screen size is at odds with the ever-growing amount of information, and explicit interaction is often needed to get an overview of the currently available information. We provide an aggregation/filtering approach as well as several display concepts based on a self-built, power-efficient smartwatch prototype with twelve full-color LEDs around a low-resolution display. We evaluated our concepts in a user study with twelve participants, and we conclude with guidelines that could easily be applied to today's smartwatches to provide more expressive notification systems.
DOI: https://doi.org/10.1145/2957265.2962657
WhichHand: automatic recognition of a smartphone's position in the hand using a smartwatch
Hyunchul Lim, G. An, YoonKyong Cho, Kyogu Lee, B. Suh
Mobile users often operate their devices with one hand, which motivates enhancing one-handed interaction. In this paper, we present WhichHand, a system that 1) automatically detects which hand is holding a mobile phone and then 2) enhances user interfaces by adapting layouts to left- or right-handed use. WhichHand utilizes the orientation sensors of a smartphone and a smartwatch; the relationship between the sensor data of the two devices plays an important role in our recognition system. We evaluated WhichHand in a controlled study with 14 participants and conducted a user study with 10 participants to gather feedback. Recognition accuracy of over 97% and the early feedback on WhichHand provide useful insights into designing for one-handed interaction.
DOI: https://doi.org/10.1145/2957265.2961857
Affective-ready, contextual and automated usability test for mobile software
Jackson Feijó Filho, Wilson Prata, Juan Oliveira
This work proposes a system that performs affective-ready, contextual and automated usability tests for mobile software. Our proposal augments traditional methods of software usability evaluation by automatically monitoring users' location, weather conditions, moving/stationary status, data connection availability and spontaneous facial expressions. The aim is to identify the moments at which negative and positive events occur. Identifying those situations and systematically associating them with the context of interaction helps software creators overcome design flaws and build on an interface's strengths. The validation of our approach includes post-test questionnaires with test subjects. The results indicate that automated user-context logging can be a substantial supplement to mobile software usability tests.
DOI: https://doi.org/10.1145/2957265.2961834
A novel image based positioning technique using mobile eye tracker for a museum visit
Moayad Mokatren, T. Kuflik, I. Shimshoni
Eye tracking can be an easy way to identify users' focus of attention and interests. This promise has triggered large and continuing research and technology development efforts, with remarkable results. In this paper, we aim at developing a novel technique for location awareness, interest detection and focus-of-attention estimation using computer vision techniques and mobile eye-tracking technology. Our focus is on the museum visit, optimizing the positioning procedure by exploiting the visit style to choose the appropriate algorithm.
DOI: https://doi.org/10.1145/2957265.2962647
Exploring notifications in smart home environments
Alexandra Voit, Tonja Machulla, Dominik Weber, V. Schwind, Stefan Schneegass, N. Henze
Notifications are a core mechanism of current smart devices. They inform users about a variety of events, including messages, social network comments, and application updates. While users appreciate the awareness that notifications provide, notifications cause distraction, higher cognitive load, and task interruptions. With the increasing importance of smart environments, the number of sensors that could trigger notifications will increase dramatically. A flower with a moisture sensor, for example, could create a notification whenever the flower needs water. We assume that current notification mechanisms will not scale with the increasing number of notifications. We therefore explore notification mechanisms for smart homes in which notifications are shown on smartphones, on displays in the environment, next to the sending objects, or on the user's body. In an online survey, we compare the four locations across four scenarios. While different aspects influence the perceived suitability of each notification location, the smartphone is generally rated best.
DOI: https://doi.org/10.1145/2957265.2962661
Assistive mobile application for support of mobility and communication of people with IDD
M. Kultsova, R. Romanenko, I. Zhukova, A. Usov, Nikita Penskoy, Tatiana Potapova
This paper describes the mobile application 'Travel and Communication Assistant', which supports the mobility and communication of people with Intellectual and Developmental Disabilities (IDD). The application enables people with IDD to independently travel a known route (for example, from home to the day care center, or from home to the baker's) under the remote supervision of their caregivers, and to communicate with them using text, voice and pictogram messages.
DOI: https://doi.org/10.1145/2957265.2965003