Inferring user action with mobile gaze tracking
M. Toivanen, K. Puolamäki, Kristian Lukander, J. Häkkinen, J. Radun
Gaze tracking in psychological, cognitive, and user interaction studies has recently evolved toward mobile solutions, as they enable direct assessment of users' visual attention in natural environments and in augmented and virtual reality (AR/VR) applications. Productive approaches to analyzing and predicting user actions with gaze data require a multidisciplinary effort involving experts in cognitive and behavioral sciences, machine vision, and machine learning. This workshop brings together a cross-domain group of individuals to (i) discuss and contribute to the problem of using mobile gaze tracking for inferring user action, (ii) advance the sharing of data, analysis algorithms, and device solutions, and (iii) increase understanding of behavioral aspects of gaze-action sequences in natural environments and AR/VR applications.
{"title":"Inferring user action with mobile gaze tracking","authors":"M. Toivanen, K. Puolamäki, Kristian Lukander, J. Häkkinen, J. Radun","doi":"10.1145/2957265.2965016","DOIUrl":"https://doi.org/10.1145/2957265.2965016","url":null,"abstract":"Gaze tracking in psychological, cognitive, and user interaction studies has recently evolved toward mobile solutions, as they enable direct assessing of users' visual attention in natural environments, and augmented and virtual reality (AR/VR) applications. Productive approaches in analyzing and predicting user actions with gaze data require a multidisciplinary approach with experts in cognitive and behavioral sciences, machine vision, and machine learning. This workshop brings together a cross-domain group of individuals to (i) discuss and contribute to the problem of using mobile gaze tracking for inferring user action, (ii) advance the sharing of data and analysis algorithms as well as device solutions, and (iii) increase understanding of behavioral aspects of gaze-action sequences in natural environments and AR/VR applications.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132050943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Oh that's what you meant!: reducing emoji misunderstanding
Garreth W. Tigwell, David R. Flatla
Emoji provide a way to express nonverbal conversational cues in computer-mediated communication. However, people need to share the same understanding of what each emoji symbolises, otherwise communication can break down. We surveyed 436 people about their use of emoji and ran an interactive study using a two-dimensional emotion space to investigate (1) the variation in people's interpretation of emoji and (2) their interpretation of corresponding Android and iOS emoji. Our results show variations between people's ratings within and across platforms. We outline our solution to reduce misunderstandings that arise from different interpretations of emoji.
{"title":"Oh that's what you meant!: reducing emoji misunderstanding","authors":"Garreth W. Tigwell, David R. Flatla","doi":"10.1145/2957265.2961844","DOIUrl":"https://doi.org/10.1145/2957265.2961844","url":null,"abstract":"Emoji provide a way to express nonverbal conversational cues in computer-mediated communication. However, people need to share the same understanding of what each emoji symbolises, otherwise communication can breakdown. We surveyed 436 people about their use of emoji and ran an interactive study using a two-dimensional emotion space to investigate (1) the variation in people's interpretation of emoji and (2) their interpretation of corresponding Android and iOS emoji. Our results show variations between people's ratings within and across platforms. We outline our solution to reduce misunderstandings that arise from different interpretations of emoji.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134268721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audio in place: media, mobility & HCI - creating meaning in space
A. Chamberlain, Mads Bødker, Adrian Hazzard, S. Benford
Audio-based content, location and mobile technologies can offer a multitude of interactional possibilities when combined in innovative and creative ways. It is important not to underestimate the impact of the interplay between location, place and sound. Even though intangible and ephemeral, sounds shape the way in which we experience the built as well as the natural world. As technology offers us the opportunity to augment and access the world, mobile technologies offer us the opportunity to interact while moving through it. They are technologies that can mediate, provide and locate experience in the world. Vision, and to some extent the tactile senses, have been the dominant modalities discussed in experiential terms within HCI. This workshop suggests that there is a need to better understand how sound can be used to shape and augment the experiential qualities of places through mobile computing.
{"title":"Audio in place: media, mobility & HCI - creating meaning in space","authors":"A. Chamberlain, Mads Bødker, Adrian Hazzard, S. Benford","doi":"10.1145/2957265.2964195","DOIUrl":"https://doi.org/10.1145/2957265.2964195","url":null,"abstract":"Audio-based content, location and mobile technologies can offer a multitude of interactional possibilities when combined in innovative and creative ways. It is important not to underestimate impact of the interplay between location, place and sound. Even if intangible and ephemeral, sounds impact upon the way in which we experience the built as well as the natural world. As technology offer us the opportunity to augment and access the world, mobile technologies offer us the opportunity to interact while moving though the world. They are technologies that can mediate, provide and locate experience in the world. Vision, and to some extent the tactile senses have been dominant modalities discussed in experiential terms within HCI. This workshop suggests that there is a need to better understand how sound can be used for shaping and augmenting the experiential qualities of places through mobile computing.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134273323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I'm just trying to survive: an ethnographic look at mobile notifications and attention management
Julieta Aranda, Noor F. Ali-Hasan, S. Baig
There is a prevailing sentiment in popular culture that we have become too attached to our phones. Smartphone notifications play a critical role in drawing people's attention to their phones. As user experience researchers on the Android team at Google, we used an ethnographic approach to understand how people experience smartphone notifications. We conducted an ethnographic study of smartphone users in New York City, while engaging members of our product team (including product managers, engineers, and designers) in the data collection and analysis. In this case study, we describe our research methods and what we learned about notifications' role in people's lives, and discuss the impact that our research has had on various product teams at Google.
{"title":"I'm just trying to survive: an ethnographic look at mobile notifications and attention management","authors":"Julieta Aranda, Noor F. Ali-Hasan, S. Baig","doi":"10.1145/2957265.2957274","DOIUrl":"https://doi.org/10.1145/2957265.2957274","url":null,"abstract":"There is a prevailing sentiment in popular culture that we have become too attached to our phones. Smartphone notifications play a critical role in drawing people's attention to their phones. As user experience researchers on the Android team at Google, we used an ethnographic approach to understand how people experience smartphone notifications. We conducted an ethnographic study of smartphone users in New York City, while engaging members of our product team (including product managers, engineers, and designers) in the data collection and analysis. In this case study, we describe our research methods, what we learned about notifications' role in people's lives, and discuss the impact that our research has had on various product teams at Google.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"51 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133304768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MusiSkate: enhancing the skateboarding experience through musical feedback
Sarthak Ghosh, Pratik Shah, L. Navarro, Xiaowei Chen
In this paper, we investigate the potential of using musical feedback to enhance the skateboarding experience and to encourage skaters to develop their skills. We follow a user-centered design (UCD) process to discover opportunities for technological contributions in skating, and then propose MusiSkate as a solution based on user needs and contexts. Our findings suggest that MusiSkate has the potential to enhance the satisfaction of skating. Furthermore, it conforms to the guidelines for designing skateboarding applications set forth by existing literature. Finally, we suggest future explorations of audio feedback for skateboarding based on the results of our pilot study.
{"title":"MusiSkate: enhancing the skateboarding experience through musical feedback","authors":"Sarthak Ghosh, Pratik Shah, L. Navarro, Xiaowei Chen","doi":"10.1145/2957265.2961854","DOIUrl":"https://doi.org/10.1145/2957265.2961854","url":null,"abstract":"In this paper, we investigate the potential of using musical feedback to enhance the skateboarding experience and to encourage skaters to gain more skills. We adhere to the UCD (User-Centered Design) process to discover opportunities for technological contributions in skating, followed by proposing MusiSkate as a solution based on user needs and contexts. Our findings suggest that MusiSkate has the potential to enhance the satisfaction of skating. Furthermore, it conforms to the guidelines for designing skateboarding applications as set forth by existing literature. Finally, we suggest future explorations for using audio feedback with skateboarding based on the results of our pilot study.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129371878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Knowledge transfer from experts to novices in minimally invasive catheter-mediated (MIC) interventions, eye-tracking study
Mohammad Al-Naser, P. Lanzer, A. Dengel, S. S. Bukhari, Seyyed Saleh Mozaffari Chanijani
Minimally invasive catheter-mediated (MIC) interventions represent a key approach to treating patients with a wide range of cardiovascular diseases; the operator's performance relies on his or her ability to read the dynamic (cine, fluoroscopy) and static x-ray images rapidly and accurately. Here, we demonstrate the feasibility of expertise transfer by employing a low-cost eye tracking system for expert gaze visualization in a real-life MIC interventional scenario. Because the video quality from the head-mounted eye tracker is not sufficient for data analysis, owing to head movement, dark shades, blurring, etc., we have developed an automatic method for mapping the recorded gaze from the eye-tracker video onto the high-quality x-ray video, allowing the complete visual perception of individual operators to be tracked throughout the live performance of individual interventions based on high-resolution image recordings. The high-quality gaze videos from expert doctors provide an important educational resource for teaching novices how to read the dynamic x-ray images.
{"title":"Knowledge transfer from experts to novices in minimally invasive catheter-mediated (MIC) interventions, eye-tracking study","authors":"Mohammad Al-Naser, P. Lanzer, A. Dengel, S. S. Bukhari, Seyyed Saleh Mozaffari Chanijani","doi":"10.1145/2957265.2965013","DOIUrl":"https://doi.org/10.1145/2957265.2965013","url":null,"abstract":"Minimally invasive catheter-mediated (MIC) interventions represent a key approach to treat patients with a wide range of cardiovascular diseases; the operators' performance rely on his or her ability to read the dynamic (cine, fluoroscopy) and static x-rays images rapidly, and accurately. Here, we demonstrate the feasibility of expertise transfer employing a low cost eye tracking system for experts gaze visualization in real-life (MIC) interventional scenario. As the video quality from head-mounted eye tracker is not sufficient for data analysis, due to head-movement, dark shades, blurring, etc., therefore we have developed an automatic method for mapping the recorded gaze from the eye-tracker video to high quality x-ray video, allowing for tracking of the complete visual perception of individual operators throughout the life performance of individual interventions based on high resolution image recordings. The high quality gaze video from an expert doctors provide an important educational resource to teach novices how to read the dynamic x-ray images.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131040425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ForceBoard: using force as input technique on size-limited soft keyboard
Min-Chieh Hsiu, Da-Yuan Huang, Chi An Chen, Yu-Chih Lin, Y. Hung, De-Nian Yang, Mike Y. Chen
Various typing methods for QWERTY-based keyboards on smartwatches have been proposed in recent years. However, since each key can occupy only limited input space and our fingers are comparatively large, recent solutions are mainly two-step typing methods: users have to navigate to the desired key on an enlarged keyboard and then select the target. The two-step process is distant from our physical keyboard experience and requires users to frequently change the keyboard layout. The aim of this paper is to propose a single-step typing technique that allows users to key in a character with a single touch. We introduce ForceBoard, which combines two adjacent keys into one region and uses force as the selection mechanism. This not only provides more precise selection but also allows users to type text without changing the visual content of the keyboard. We conducted a study comparing the performance of ForceBoard with two state-of-the-art two-step methods, ZoomBoard and SplitBoard. Our results show that ForceBoard significantly outperformed ZoomBoard by 30.52% on average and was slightly better than SplitBoard. Furthermore, ForceBoard also received higher preference ratings for typing speed and satisfaction.
{"title":"ForceBoard: using force as input technique on size-limited soft keyboard","authors":"Min-Chieh Hsiu, Da-Yuan Huang, Chi An Chen, Yu-Chih Lin, Y. Hung, De-Nian Yang, Mike Y. Chen","doi":"10.1145/2957265.2961827","DOIUrl":"https://doi.org/10.1145/2957265.2961827","url":null,"abstract":"Various typing methods of qwerty-based keyboards on smartwatches have been proposed in recent years. However, since each key can only occupy limited input space and our fingers are too big, recent solutions are mainly two-step typing methods. Users have to navigate the desired key on an enlarged keyboard, and then select the target. The two-step process is distant from our physical keyboard experiences, and requires users to frequently change the keyboard layouts. The aim of this paper is to propose a single-step typing technique that allows users to key in a character with a single touch. We introduce ForceBoard, which combines two adjacent keys into one region and uses force as selecting mechanism. By using that, it not only provides more precise selection, but also allows users to type texts without changing the visual contents of keyboard. We conducted a study comparing the performance of ForceBoard with other two state-of-the-art two-step methods, ZoomBoard and SplitBoard. Our results showed that ForceBoard outperformed ZoomBoard significantly with 30.52% on average, and was slightly better than SplitBoard. Furthermore, ForceBoard also received higher preferences on text speed and satisfaction.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115173113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile cross-media visualisations made from building information modelling data
L. Oppermann, Marius Shekow, Deniz Bicer
The advent of Building Information Modelling (BIM) provides geometry data that can be easily used for visualisations. We present six demonstrators made from the same data using similar workflows. They cover different categories of mobile devices, ranging from head-mounted displays to smartphones and tablets with inside-out positional tracking. They showcase cross-media visualisations that depend on the device capabilities, ranging from a sophisticated car-based AR setup, through wired and wireless VR, to see-through AR on smart glasses and video-based AR on tablets.
{"title":"Mobile cross-media visualisations made from building information modelling data","authors":"L. Oppermann, Marius Shekow, Deniz Bicer","doi":"10.1145/2957265.2961852","DOIUrl":"https://doi.org/10.1145/2957265.2961852","url":null,"abstract":"The advent of Building Information Modelling (BIM) provides geometry data that can be easily used for visualisations. We present six demonstrators made from the same data using similar workflows. They cover different categories of mobile devices, ranging from head-mounted displays to smartphones and tablets with inside-out positional tracking. They showcase cross-media visualisations depending on the device capabilities, which vary from a sophisticated car-based AR-setup, over wired and wireless VR, to see-through AR on smart glasses, and video-based AR on tablets.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"30 48","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113934433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing Android Marshmallow volume controls: a user experience case study
Noor F. Ali-Hasan, Rachel Garb, Mindy Pereira
When a user presses the physical hardware volume keys on a smartphone, volume controls appear on the screen. Traditionally, these volume controls are represented by a single slider. Android's Lollipop release introduced new functionality into the volume controls: a way to temporarily silence interruptions from notifications and phone calls. After the launch of Android Lollipop, we discovered some usability issues with the more robust volume controls interface. We decided to address these issues in Android's next release, Marshmallow. In this case study, we describe the trade-offs we faced in designing the Android Marshmallow volume controls and how our interdisciplinary user experience team designed a longitudinal study that helped us evaluate two designs we were considering. We also describe the impact of our research approach and how we arrived at a final design.
{"title":"Designing Android Marshmallow volume controls: a user experience case study","authors":"Noor F. Ali-Hasan, Rachel Garb, Mindy Pereira","doi":"10.1145/2957265.2957273","DOIUrl":"https://doi.org/10.1145/2957265.2957273","url":null,"abstract":"When a user presses physical hardware volume keys on a smartphone, volume controls appears on the screen. Traditionally, these volume controls are represented by a single slider. Android's Lollipop release introduced new functionality into the volume controls: a way to temporarily silence interruptions from notifications and phone calls. After the launch of Android Lollipop, we discovered some usability issues with the more robust volume controls interface. We decided to address these issues in Android's next release, Marshmallow. In this case study, we describe the trade-offs we faced in designing Android Marshmallow volume controls and how our interdisciplinary user experience team designed a longitudinal study that helped us evaluate two designs we were considering. We also describe the impact of our research approach and how we arrived at a final design.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115882795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Educating bicycle safety and fostering empathy for cyclists with an affordable and game-based VR app
Wesley Wang, K. Singh, Yan Ting Mandy Chu, A. Huber
In recent years, there has been a rise in the use of virtual reality (VR) in both specialized fields and commercial settings. Modern applications of VR include games, films, education, the arts, and healthcare. Today, VR applications exist beyond expensive research labs; they are being employed to solve real-world problems. To explore a new practical application of VR, we designed and prototyped a work-in-progress mobile VR app that presents common biking incidents in the form of a choose-your-own-adventure game. Our goal is to teach people about bicycle safety in cities and to foster empathy within the driving community towards cyclists.
{"title":"Educating bicycle safety and fostering empathy for cyclists with an affordable and game-based VR app","authors":"Wesley Wang, K. Singh, Yan Ting Mandy Chu, A. Huber","doi":"10.1145/2957265.2961846","DOIUrl":"https://doi.org/10.1145/2957265.2961846","url":null,"abstract":"In recent years, there has been a rise in the use of virtual reality (VR) both in specialized fields and commercial settings. Modern applications of VR include games, films, education, arts, and healthcare, etc. Today, VR applications exist beyond expensive research labs; they are being employed to solve real world problems. To explore a new practical application of VR, we designed and prototyped a work-in-progress VR mobile app of common biking incidents in the form of a choose-your-own-adventure game. Our goal is to teach people about bicycle safety in cities, and to foster empathy within the driving community towards cyclists.","PeriodicalId":131157,"journal":{"name":"Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123649349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}