{"title":"Session details: Wrist and hand interaction II","authors":"Luis A. Leiva","doi":"10.1145/3254094","DOIUrl":"https://doi.org/10.1145/3254094","url":null,"abstract":"","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128223426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","authors":"F. Paternò, Kaisa Väänänen, K. Church, Jonna Häkkilä, A. Krüger, M. Serrano","doi":"10.1145/2935334","DOIUrl":"https://doi.org/10.1145/2935334","url":null,"abstract":"MobileHCI brings together people from diverse backgrounds and areas of expertise to provide a truly multidisciplinary forum. Academics, hardware and software developers, designers and practitioners alike can discuss challenges encountered on different frontiers of mobility, as well as potential solutions that will advance the field. The conference covers both academic and industry research, ranging from fundamental interaction models and techniques to social and cultural aspects of everyday life with mobile devices and services.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130006429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining contribution interactions to increase coverage in mobile participatory sensing systems","authors":"Yun Huang, J. Zimmerman, A. Tomasic, Aaron Steinfeld","doi":"10.1145/2935334.2935387","DOIUrl":"https://doi.org/10.1145/2935334.2935387","url":null,"abstract":"Participatory sensing systems use people and their smartphones as a sensing infrastructure, and getting people to make contributions remains a critical challenge. Little work details how system designers should combine different interactions to increase coverage of service location. Tiramisu, a participatory sensing system, invites transit riders to crowdsource real-time arrival information by sharing location traces when they commute. We extended this system with a new feature that allows riders at stops to \"spot\" buses passing by. To better understand the impact of this new feature, we conducted an observational log analysis, examining changes in coverage and user behavior before and after the new feature. Following the addition of the spotting feature, participants' contributions increased coverage (the number of trips with real-time data) by 98%, and they used the app more than twice as much. The addition of the spotting feature was also followed by a significant increase of trace contributions.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130868963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding back-to-front pinching for eyes-free mobile touch input","authors":"Christian Corsten, Andreas Link, Thorsten Karrer, Jan O. Borchers","doi":"10.1145/2935334.2935371","DOIUrl":"https://doi.org/10.1145/2935334.2935371","url":null,"abstract":"Using a smartphone touchscreen to control apps mirrored to a distant display is hard, since the user cannot see where she is touching while looking at the distant screen. Tactile landmarks at the back of the phone can mitigate this problem, especially in landscape mode [3]: By moving a finger across these landmarks, the user can haptically estimate the finger position in proportion to the touchscreen. Upon pinching the thumb resting above the touchscreen towards that finger at the back, the finger position is transferred to the front and registered as a touch. However, despite proprioception, this technique leads to a shift between back and front position, denoted as pinch error. We investigated this error using different target locations, device thicknesses, and tilt angles to derive target sizes that can be acquired at a 96% success rate.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"337 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134327630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Text entry","authors":"A. Quigley","doi":"10.1145/3254085","DOIUrl":"https://doi.org/10.1145/3254085","url":null,"abstract":"","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114487288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Tools","authors":"Daniel Ashbrook","doi":"10.1145/3254090","DOIUrl":"https://doi.org/10.1145/3254090","url":null,"abstract":"","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128902663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NailTactors: eyes-free spatial output using a nail-mounted tactor array","authors":"Meng-Ju Hsieh, Rong-Hao Liang, Bing-Yu Chen","doi":"10.1145/2935334.2935358","DOIUrl":"https://doi.org/10.1145/2935334.2935358","url":null,"abstract":"This paper investigates the feasibility of using a nail-mounted array of tactors, NailTactors, as an eyes-free output device. By attaching eccentric-rotating-mass (ERM) vibrators to the rims of artificial nails, miniature high-resolution tactile displays were realized. To understand how to deliver rich signals to users for reliable signal perception, three user studies were conducted. The results suggest that users can not only recognize absolute and relative directional cues but also recognize numerical characters in EdgeWrite format, with an overall 89% recognition rate. The experiments also identified the optimal placement of ERM actuators for maximizing information transfer.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132098556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wearable ESM: differences in the experience sampling method across wearable devices","authors":"Javier Hernández, Daniel J. McDuff, Christian Infante, P. Maes, K. Quigley, Rosalind W. Picard","doi":"10.1145/2935334.2935340","DOIUrl":"https://doi.org/10.1145/2935334.2935340","url":null,"abstract":"The Experience Sampling Method is widely used for collecting self-report responses from people in natural settings. While most traditional approaches rely on using a phone to trigger prompts and record information, wearable devices now offer new opportunities that may improve this method. This research quantitatively and qualitatively studies the experience sampling process on head-worn and wrist-worn wearable devices, and compares them to the traditional \"smartphone in the pocket.\" To enable this work, we designed and implemented a custom application to provide similar prompts across the three types of devices and evaluated it with 15 individuals for five days (75 days total), in the context of real-life stress measurement. We found significant differences in response times across devices, and captured tradeoffs in interaction types, screen size, and device familiarity that can affect both users' experience and the reports made by users.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125496503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conversational context helps improve mobile notification management","authors":"Florian Schulze, Georg Groh","doi":"10.1145/2935334.2935347","DOIUrl":"https://doi.org/10.1145/2935334.2935347","url":null,"abstract":"We explore if and how identifying the character of face-to-face conversations can help manage notifications on smartphones so that they become less disruptive. We show that the social dimensions depth/importance and formality/goal orientation of a conversation are strong indicators of receptiveness. Furthermore, we find that there are types of conversation, e.g. small talk, in which individuals are even more receptive to notifications than in situations without any verbal social interaction at all. This refutes the assumption currently found in the literature that the occurrence of a conversation is a strong predictor of unavailability. We demonstrate a system that tracks conversations in which the user is engaged and that analyzes speech in terms of embedded affective and social cues. Ultimately, we find that information of either kind, derived from audio, substantially improves the accuracy of personal notification preference models.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128582765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interruption and pausing of public display games","authors":"Tiare M. Feuchtner, Robert Walter, Jörg Müller","doi":"10.1145/2935334.2935335","DOIUrl":"https://doi.org/10.1145/2935334.2935335","url":null,"abstract":"We present a quantitative and qualitative analysis of interruptions of interaction with a public display game, and explore the use of a manual pause mode in this scenario. In previous public display installations we observed users frequently interrupting their interaction. To explore ways of supporting such behavior, we implemented a gesture controlled multiuser game with four pausing techniques. We evaluated them in a field study analyzing 704 users and found that our pausing techniques were eagerly explored, but rarely used with the intention to pause the game. Our study shows that interactions with public displays are considerably intermissive, and that users mostly interrupt interaction to socialize and mainly approach public displays in groups. We conclude that, as a typical characteristic of public display interaction, interruptions deserve consideration. However, manual pause modes are not well suited for games on public displays. Instead, interruptions should be implicitly supported by the application design.","PeriodicalId":420843,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121737156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}