Location cloaking methods enable the protection of private location data. Different temporal and spatial approaches to cloaking a specific user location (e.g., k-anonymity) have been suggested. Beyond research focusing on functionality, however, little work has examined how cloaking methods should be presented to the user. In practice, common location-referencing services force the user to either accept or deny exact positioning, so users cannot regulate private location information at a granular level. To improve the usage of location cloaking methods and foster location privacy protection, we conducted a user study (N = 24) comparing different visualized cloaking methods. The results of our lab study revealed a preference for visualizations that use already known and well-understood real-world entities. Thus, the use of simple, real-world concepts can contribute to the application of cloaking methods and, subsequently, to location privacy protection.
{"title":"Which cloak dresses you best?: comparing location cloaking methods for mobile users","authors":"Susen Döbelt, Johann Schrammel, M. Tscheligi","doi":"10.1145/3098279.3122138","DOIUrl":"https://doi.org/10.1145/3098279.3122138","url":null,"abstract":"Location cloaking methods enable the protection of private location data. Different temporal and spatial approaches to cloak a specific user location (e.g., k-anonymity) have been suggested. Besides the research focusing on functionality, little work has been done on how cloaking methods should be presented to the user. In practice common location referencing services force the user to either accept or deny exact positioning. Therefore, users are not enabled to regulate private location information on a granular level. To improve the usage of location cloaking methods and foster location privacy protection, we conducted a user study (N = 24) comparing different visualized cloaking methods. The results of our lab study revealed a preference for visualizations using already known and well understood real world entities. Thus, the usage of simple and real world concepts can contribute to the application of cloaking methods and subsequently to location privacy protection.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122791144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile devices such as smartphones facilitate performing IT interaction in parallel with other tasks. In particular, smartphone use while walking can often be observed. This can lead to distracted walking, which frequently has serious consequences for the user and others. As people will probably continue using mobile devices in this way, e.g., for navigation, we have to provide interfaces that are adapted to the special needs of secondary use while walking. One way to adapt apps to walking is to reduce the time needed for content perception. A substantial part of content is given in written language, so reading is of high importance. Based on this, we pursued the reduction of reading time on a mobile device during walking. We conducted a study that provides information about the minimum acceptable letter sizes for reading single words while walking. In an experiment, we combined short presentation times of commonly used words with different letter sizes and analyzed the outcome by means of psychometric functions. We estimated these functions for the independent variables presentation time, walking speed vs. standing, and word length (number of letters), and analyzed the outcome statistically with respect to the minimal visual angle. We found highly significant influences of all three conditions on the legibility of the words. We recommend adapting applications for walking according to our findings.
{"title":"Influence of letter size on word reading performance during walking","authors":"J. Conradi","doi":"10.1145/3098279.3098554","DOIUrl":"https://doi.org/10.1145/3098279.3098554","url":null,"abstract":"Mobile devices like smartphones facilitate parallel activities of IT-interaction and other tasks. Especially smartphone usage during walking can often be observed. This can lead to distracted walking which frequently results in serious consequences for the user and others. As people will probably go on using mobile devices in this way, e.g., for navigation, we have to provide interaction interfaces which are adapted to the special needs of secondary usage while walking. One way to adapt apps to walking is to reduce time for content perception. A substantial part of content is given in written language, and therefore, reading is of high importance. Based on this, we pursued the reduction of reading time on a mobile device during walking. We conducted a study which provides information about the minimum acceptable letter sizes for reading single words while walking. We carried out an experiment in which we combined short presentation times of commonly used words with different letter sizes and analyzed the outcome by means of psychometric functions. We administered these functions for the independent variables presentation time, walking speed vs. standing and length of words (number of letters) and analyzed the outcome statistically in respect to the minimal visual angle. We found highly significant influences of all three conditions on the legibility of the words. We recommend the adaptation of applications for walking according to our findings.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124981881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Limin Zeng, G. Weber, Markus Simros, Peter D. Conradie, Jelle Saldien, I. Ravyse, J. V. Erp, T. Mioch
In this paper we present our Range-IT prototype, a 3D-depth-camera-based electronic travel aid (ETA) that assists visually impaired people in obtaining detailed information about surrounding objects. In addition to detecting indoor obstacles and identifying several objects of interest (e.g., walls, open doors, and stairs) at distances of up to 7 meters, the Range-IT system employs a multimodal audio-vibrotactile user interface to present this spatial information.
{"title":"Range-IT: detection and multimodal presentation of indoor objects for visually impaired people","authors":"Limin Zeng, G. Weber, Markus Simros, Peter D. Conradie, Jelle Saldien, I. Ravyse, J. V. Erp, T. Mioch","doi":"10.1145/3098279.3125442","DOIUrl":"https://doi.org/10.1145/3098279.3125442","url":null,"abstract":"In the paper we present our Range-IT prototype, which is a 3D depth camera based electronic travel aid (ETA) to assist visually impaired people in finding out detailed information of surrounding objects. In addition to detecting indoor obstacles and identifying several objects of interest (e.g., walls, open doors and stairs) up to 7 meters, the Range-IT system employs a multimodal audio-vibrotactile user interface to present this spatial information.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131303258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jennifer Pearson, Simon Robinson, Matt Jones, C. Coutrix
This research forms part of a wider body of work focused on involving emergent users---those just beginning to get access to mobile devices---in the development and refinement of far-future technologies. In this paper we present an evaluation, with emergent users, of a new type of deformable slider, designed to investigate whether shape-changing interfaces provide any benefit over touchscreens for this type of user. Our trials, which took place in two contexts and three disparate regions, revealed that while there was a clear correlation between performance and technology exposure, emergent users had similar ability with both touchscreen and deformable controls.
{"title":"Evaluating deformable devices with emergent users","authors":"Jennifer Pearson, Simon Robinson, Matt Jones, C. Coutrix","doi":"10.1145/3098279.3098555","DOIUrl":"https://doi.org/10.1145/3098279.3098555","url":null,"abstract":"This research forms part of a wider body of work focused around involving emergent users---those just beginning to get access to mobile devices---in the development and refinement of far-future technologies. In this paper we present an evaluation of a new type of deformable slider with emergent users, designed to investigate whether shape-changing interfaces provide any benefit over touchscreens for this type of user. Our trials, which took place in two contexts and three disparate regions, revealed that while there was a clear correlation between performance and technology exposure, emergent users had similar ability with both touchscreen and deformable controls.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130440109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yashoda Bhavnani, K. Rodden, Laura Cuozzo Guarnotta, Margaret T. Lynn, S. Chizari, Laura A. Granka
It can be very challenging to get an accurate understanding of mobile phone usage because of the difficulty of observing phone activity in a natural setting. We describe a retrospective methodology in which participants review visualizations of their logged activity in an interview setting, and report our lessons learned from applying this methodology in a study of user goals and journeys across apps on mobile devices.
{"title":"Understanding mobile phone activities via retrospective review of visualizations of usage data","authors":"Yashoda Bhavnani, K. Rodden, Laura Cuozzo Guarnotta, Margaret T. Lynn, S. Chizari, Laura A. Granka","doi":"10.1145/3098279.3119841","DOIUrl":"https://doi.org/10.1145/3098279.3119841","url":null,"abstract":"It can be very challenging to get an accurate understanding of mobile phone usage because of the difficulty of observing phone activity in a natural setting. We describe a retrospective methodology where participants review visualizations of their logged activity in an interview setting, and our lessons learned in applying this methodology in a study of user goals and journeys on mobile devices across apps.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133391296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper argues that integrating entrepreneurial thinking into the field of User Experience (UX) increases the efficacy of user-centered design approaches in industrial settings. The argument unfolds in three steps: (1) the element of "problem solving" is identified as the core and common denominator of both user-centered design and entrepreneurial thinking, (2) the connection between both approaches is shown by focusing on processes as well as mental models, and (3) it is explained how extending UX with entrepreneurial elements leads to an increased impact and area of influence in industrial settings.
{"title":"Entrepreneurial & UX mindsets: two perspectives - one objective","authors":"C. Sturm, Maha Aly, B. V. Schmidt, T. Flatten","doi":"10.1145/3098279.3119912","DOIUrl":"https://doi.org/10.1145/3098279.3119912","url":null,"abstract":"This paper argues that integrating entrepreneurial thinking into the field of User Experience (UX) increases the efficacy of user-centered design approaches in industrial settings. The argument unfolds in three steps: (1) the element of \"problem solving\" is identified as being the core and the common denominator of both user- centered design and entrepreneurial thinking, (2) the connection between both approaches is shown by focusing on processes as well as mental models, and (3) it is explained how the extension of UX by entrepreneurial elements leads to an increased impact and area of influence in industrial settings.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132898283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mohammad Othman, Telmo Amaral, Roisin Mcnaney, Jan David Smeddinck, John Vines, P. Olivier
Current eye tracking technologies have a number of drawbacks when it comes to practical use in real-world settings. Common challenges, such as high levels of daylight, eyewear (e.g., spectacles or contact lenses), and eye make-up, give rise to noise that undermines the utility of eye trackers as a standard component for mobile computing, design, and evaluation. To work around these challenges, we introduce CrowdEyes, a mobile eye tracking solution that utilizes crowdsourcing for increased tracking accuracy and robustness. We present a pupil detection task design for crowd workers, together with a study that demonstrates the high accuracy of crowdsourced pupil detection in comparison to state-of-the-art pupil detection algorithms. We further demonstrate the utility of our crowdsourced analysis pipeline in a fixation tagging task. In this paper, we validate the accuracy and robustness of harnessing the crowd as both an alternative and a complement to automated pupil detection algorithms, and explore the associated costs and quality of our crowdsourcing approach.
{"title":"CrowdEyes: crowdsourcing for robust real-world mobile eye tracking","authors":"Mohammad Othman, Telmo Amaral, Roisin Mcnaney, Jan David Smeddinck, John Vines, P. Olivier","doi":"10.1145/3098279.3098559","DOIUrl":"https://doi.org/10.1145/3098279.3098559","url":null,"abstract":"Current eye tracking technologies have a number of drawbacks when it comes to practical use in real-world settings. Common challenges, such as high levels of daylight, eyewear (e.g. spectacles or contact lenses) and eye make-up, give rise to noise that undermines their utility as a standard component for mobile computing, design, and evaluation. To work around these challenges, we introduce CrowdEyes, a mobile eye tracking solution that utilizes crowdsourcing for increased tracking accuracy and robustness. We present a pupil detection task design for crowd workers together with a study that demonstrates the high-level accuracy of crowdsourced pupil detection in comparison to state-of-the-art pupil detection algorithms. We further demonstrate the utility of our crowdsourced analysis pipeline in a fixation tagging task. In this paper, we validate the accuracy and robustness of harnessing the crowd as both an alternative and complement to automated pupil detection algorithms, and explore the associated costs and quality of our crowdsourcing approach.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122029035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sven Mayer, P. Gad, Katrin Wolf, Paweł W. Woźniak, N. Henze
While most current interactive surfaces use only the position of the finger on the surface as the input source, previous work suggests using finger orientation to enrich the input space. Thus, an understanding of the physiological restrictions of the hand is required to build effective interaction techniques that use finger orientation. We conducted a study to derive the ergonomic constraints for using finger orientation as an effective input source. In a controlled experiment, we systematically manipulated finger pitch and yaw while participants performed a touch action, and participants were asked to rate the feasibility of the touch action. We found that finger pitch and yaw significantly affect perceived feasibility, and that 21.1% of the touch actions were perceived as impossible to perform. Our results show that the finger yaw input space can be divided into comfort and non-comfort zones. We further present design considerations for future interfaces using finger orientation.
{"title":"Understanding the ergonomic constraints in designing for touch surfaces","authors":"Sven Mayer, P. Gad, Katrin Wolf, Paweł W. Woźniak, N. Henze","doi":"10.1145/3098279.3098537","DOIUrl":"https://doi.org/10.1145/3098279.3098537","url":null,"abstract":"While most current interactive surfaces use only the position of the finger on the surface as the input source, previous work suggests using the finger orientation for enriching the input space. Thus, an understanding of the physiological restrictions of the hand is required to build effective interactive techniques that use finger orientation. We conducted a study to derive the ergonomic constraints for using finger orientation as an effective input source. In a controlled experiment, we systematically manipulated finger pitch and yaw while performing a touch action. Participants were asked to rate the feasibility of the touch action. We found that finger pitch and yaw do significantly affect perceived feasibility and 21.1% of the touch actions were perceived as impossible to perform. Our results show that the finger yaw input space can be divided into the comfort and non-comfort zones. We further present design considerations for future interfaces using finger orientation.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127446106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simone Kriglstein, Mario Brandmüller, M. Pohl, Christine Bauer
Due to the high potential of digital media to support learning processes and outcomes, educational games have gained wide acceptance over the years. The combination of mobile devices with location-based technologies offers new options for developing educational games that take the learners' environment into account, with the positive side effect of promoting learners' physical activity. This paper introduces a mobile educational game that promotes a better understanding of concepts related to route problems and route optimization on the basis of real-world examples in a playful manner. The game combines problem-solving tasks with a quiz to teach concepts related to the Traveling Salesman Problem (TSP) using Global Positioning System (GPS) technology.
{"title":"A location-based educational game for understanding the traveling salesman problem: a case study","authors":"Simone Kriglstein, Mario Brandmüller, M. Pohl, Christine Bauer","doi":"10.1145/3098279.3122130","DOIUrl":"https://doi.org/10.1145/3098279.3122130","url":null,"abstract":"Due to the high potential of digital media to support learning processes and outcomes, educational games have gained wide acceptance over the years. The combination of mobile devices with location-based technologies offers new options and possibilities for the development of educational games in consideration of learners' environment with the positive side effect to promote learner's physical activities. This paper introduces a mobile educational game for promoting a better understanding of concepts related to route problems and route optimization on the basis of real world examples in a playful manner. The game combines problem-solving tasks with a quiz to teach concepts related to the Traveling Salesman Problem (TSP) by using the Global Positioning System (GPS) technology.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123353705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Electrical muscle stimulation (EMS) has been used successfully in HCI to generate force feedback and simple movements in both stationary and mobile settings. However, many natural limb movements require the coordinated actuation of multiple muscles. Off-the-shelf EMS devices are typically limited in their ability to generate fine-grained movements, because they have only a small number of channels and do not provide full control over the EMS parameters. More capable medical devices are not designed for mobile use, or still offer fewer channels and less control than is desirable for HCI research. In this paper we present the concept and a prototype of a 20-channel mobile EMS system that offers full control over the EMS parameters. We discuss the requirements of wearable multi-electrode EMS systems and present the design and technical evaluation of our prototype. We further outline several application scenarios and discuss safety and certification issues.
{"title":"Zap++: a 20-channel electrical muscle stimulation system for fine-grained wearable force feedback","authors":"Tim Duente, Max Pfeiffer, M. Rohs","doi":"10.1145/3098279.3098546","DOIUrl":"https://doi.org/10.1145/3098279.3098546","url":null,"abstract":"Electrical muscle stimulation (EMS) has been used successfully in HCI to generate force feedback and simple movements both in stationary and mobile settings. However, many natural limb movements require the coordinated actuation of multiple muscles. Off-the-shelf EMS devices are typically limited in their ability to generate fine-grained movements, because they only have a low number of channels and do not provide full control over the EMS parameters. More capable medical devices are not designed for mobile use or still have a lower number of channels and less control than is desirable for HCI research. In this paper we present the concept and a prototype of a 20-channel mobile EMS system that offers full control over the EMS parameters. We discuss the requirements of wearable multi-electrode EMS systems and present the design and technical evaluation of our prototype. We further outline several application scenarios and discuss safety and certification issues.","PeriodicalId":120153,"journal":{"name":"Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130823983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}