Steering time differs between narrowing and widening linear tunnels: a narrowing tunnel takes more time to navigate than a widening one. IDGap, a model that predicts this time difference, has recently been proposed and shows an excellent fit. However, both the time difference and the model's fit had been confirmed at only a single scale: the original experiment used a 13.3-inch pen tablet, which primarily required wrist movements with a particular degree of forearm extension. In this study, we tested scale effects on the steering time difference between the two tunnel types. Participants performed steering operations at five scales, from the full area of a 21.5-inch tablet down to 1/12 of that size. The time difference appeared at every scale, and the conventional steering law did not fit the data well; IDGap improved the fit, confirming the validity of the model. Scale effects on the other results, including error rates and the index of performance, are also discussed.
{"title":"Scale Effects in the Steering Time Difference between Narrowing and Widening Linear Tunnels","authors":"Shota Yamanaka, Homei Miyashita","doi":"10.1145/2971485.2971486","DOIUrl":"https://doi.org/10.1145/2971485.2971486","url":null,"abstract":"Steering time differs between narrowing and widening linear tunnels; a narrowing tunnel requires more time to navigate than a widening one. A prediction model, IDGap, for the time difference has recently been proposed, and it shows an excellent fit. However, the time difference in movement and model fitness were confirmed on a limited scale. The experiment used a 13.3-inch pen tablet, which required primarily wrist movements with a particular level of forearm extension. In this study, we tested the scale effects in the steering time difference between the two tunnel types. In our experiment, participants performed steering operations at five scales, from the entire 21.5-inch tablet area to its 1/12-scale size. The results always showed the time difference, and the conventional steering law did not show a good fit. IDGap improved the fitness, thereby confirming the validity of the model. The scale effects for the other results, including error rates and index of performance, are also discussed.","PeriodicalId":190768,"journal":{"name":"Proceedings of the 9th Nordic Conference on Human-Computer Interaction","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117241650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We describe a study that sought to understand elite soccer children's use of visualizations to learn about and improve their own sports performance. We specifically investigate how visualizations support the players' comprehension of data. In this process, we design and evaluate visualizations based on real data. Finally, we discuss how the players' level of comprehension might depend on factors such as their general literacy and visualization literacy, and the role of visualization in coaching children.
{"title":"Designing Information Visualizations for Elite Soccer Children's Different Levels of Comprehension","authors":"Thor Herdal, Jeppe Gerner Pedersen, S. Knudsen","doi":"10.1145/2971485.2971546","DOIUrl":"https://doi.org/10.1145/2971485.2971546","url":null,"abstract":"We describe a study that sought to understand elite soccer children's use of visualizations to learn about, and improve their own sports performance. We specifically investigate how visualizations support the players' data comprehension. In this process, we design and evaluate visualizations based on real data. Finally, we discuss how the players' level of comprehension might depend on factors such as their general literacy and visualization literacy, and the role of visualization in coaching children.","PeriodicalId":190768,"journal":{"name":"Proceedings of the 9th Nordic Conference on Human-Computer Interaction","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115308278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Despite increasing interest, Sustainable HCI has been critiqued for doing too little, and perhaps at times for doing the wrong things. Still, a field like Human-Computer Interaction should aim to be part of transforming our society into a more sustainable one. But how do we do that, and what are we aiming for? With this workshop, we propose that HCI should start working with the new global Sustainable Development Goals (SDGs) that were formally adopted by the UN in September 2015. How can Sustainable HCI be inspired by, and contribute to, these goals? What should we in the field of HCI do more of, and what should we perhaps do less of? In what areas should we form partnerships in order to reach the Sustainable Development Goals, and with whom should we partner?
{"title":"HCI and UN's Sustainable Development Goals: Responsibilities, Barriers and Opportunities","authors":"Elina Eriksson, D. Pargman, Oliver Bates, Maria Normark, J. Gulliksen, Mikael Anneroth, Johan Berndtsson","doi":"10.1145/2971485.2987679","DOIUrl":"https://doi.org/10.1145/2971485.2987679","url":null,"abstract":"Despite increasing interest, Sustainable HCI has been critiqued for doing too little, and perhaps also at times for doing the wrong things. Still, a field like Human-Computer Interaction should aim at being part of transforming our society into a more sustainable one. But how do we do that, and, what are we aiming for? With this workshop, we propose that HCI should start working with the new global Sustainable Development Goals (SDG) that were formally adopted by the UN in September 2015. How can Sustainable HCI be inspired by, and contribute to these goals? What should we in the field of HCI do more of, and what should we perhaps do less of? In what areas should we form partnerships in order to reach the Sustainable Development Goals and with whom should we partner?","PeriodicalId":190768,"journal":{"name":"Proceedings of the 9th Nordic Conference on Human-Computer Interaction","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125628158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile devices and apps have become an essential part of our daily activities. Multi-touch gesture interaction directly on the touch screen is one of the most common ways to interact with mobile devices. However, in special circumstances (e.g., disabilities, wet hands, or heavy gloves worn outside in cold weather), it is difficult to interact directly on the touch screen. In this work, we utilize the 3D accelerometer sensor available in most current mobile devices to provide an alternative to the standard set of multi-touch gestures. We defined these 3D accelerometer-based gestures through a user study and built an open-source library, called 3DA-Gest, that provides the functionality to mobile application developers. Further, we built a proof-of-concept map-based mobile app to test the library. A preliminary user study shows that users prefer our accelerometer-based gestures in special circumstances.
{"title":"3D Accelerometer-based Gestures for Interacting with Mobile Devices","authors":"S. Humayoun, Munir Ahmad, A. Ebert","doi":"10.1145/2971485.2996736","DOIUrl":"https://doi.org/10.1145/2971485.2996736","url":null,"abstract":"Mobile devices and apps have become an essential part of our daily life activities. Multi-touch gesture interaction directly on the touch screen is one of the most common ways to interact with mobile devices. However, in special circumstances (e.g., disabilities, wet hands, wearing heavy gloves outside in cold weather, etc.) it is difficult to interact directly on the touch screen. In this work, we focus on utilizing the 3D accelerometer sensor, available in most of the current mobile devices, as a way to provide an alternative set of gestures to the standard set of multi-touch gestures. We defined these 3D accelerometer-based gestures' definitions based on a user study and built an opens-source library, called 3DA-Gest, for providing the functionality to be used by mobile application developers. Further, we built a proof of concept map-based mobile app to check the working of our library. The preliminary conducted user study shows that users prefer to use our accelerometer-based gestures in special circumstances.","PeriodicalId":190768,"journal":{"name":"Proceedings of the 9th Nordic Conference on Human-Computer Interaction","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124141774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents TAB Sharing, a web-based and mobile platform for e-participation. With TAB Sharing, citizens are empowered to create and share proposals with Public Administration (PA) actors and other citizens. They can submit a problem occurring in their community or an initiative that could be pursued, as well as a concrete and detailed description of a possible solution. Gamification elements have been included in TAB Sharing to foster participation, making use of the application continuous over time and supporting decision-making through the content exchanged. A user study with 20 citizens of two Italian municipalities compared a version of TAB Sharing without gamification against the version with it; the results reported in the paper show the added value that gamification elements bring to the e-participation domain.
{"title":"Promoting Citizen Participation through Gamification","authors":"D. Bianchini, D. Fogli, D. Ragazzi","doi":"10.1145/2971485.2971543","DOIUrl":"https://doi.org/10.1145/2971485.2971543","url":null,"abstract":"This paper presents TAB Sharing, a web-based and mobile platform for e-participation. With TAB Sharing citizens are empowered to create and share proposals with Public Administration (PA) actors and other citizens. They can submit a problem occurring in their community or an initiative that could be pursued, as well as a concrete and detailed description of a possible solution. Gamification elements have been included in TAB Sharing to foster participation. This makes the use of the application continuous over time and supports decision-making through the content exchanged. A user study with 20 citizens of two Italian municipalities has been carried out to compare a version of TAB Sharing without gamification and the one with gamification; the results of the study reported in the paper show the added value brought by gamification elements in the e-participation domain.","PeriodicalId":190768,"journal":{"name":"Proceedings of the 9th Nordic Conference on Human-Computer Interaction","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123705099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, four interactional modes of pervasive affective sensing are identified: in situ intentional, retrospective, automatic, and reconstructive. These modes are used to discuss and highlight the challenges of designing pervasive affective sensing systems for mental health care applications. We also present the design of the Grasp platform, which consists of a hand-held, tangible stone-like object with accompanying peripherals. The device is equipped with a force sensor that registers squeezes, supports wireless transmission of data, and comes with a cradle for initiating the wireless connection and data transfer. In addition, the platform includes a tablet app that can render squeezes in real time or visualize the data from a given time period. In this paper, we focus mainly on the design of the tangible interaction and address the challenges of designing for in situ tangible affective interaction.
{"title":"Designing for Tangible Affective Interaction","authors":"Frode Guribye, Tor Gjosater, Christian Bjartli","doi":"10.1145/2971485.2971547","DOIUrl":"https://doi.org/10.1145/2971485.2971547","url":null,"abstract":"In this paper, four interactional modes of pervasive affective sensing are identified: in situ intentional, retrospective, automatic, and reconstructive. These modes are used to discuss and highlight the challenges of designing pervasive affective sensing systems for mental health care applications. We also present the design of the Grasp platform, which consists of a hand-held, tangible stone-like object with accompanying peripherals. This device is equipped with a force sensor that registers squeezes, includes capabilities for wireless transmission of data, and comes with a crib for initiating the wireless connection and data transfer. In addition, the platform includes an app on a tablet that can render squeezes in real time or visualize the data from a given time period. In this paper, we focus mainly on the design of the tangible interaction and address the challenges of designing for in situ tangible affective interaction.","PeriodicalId":190768,"journal":{"name":"Proceedings of the 9th Nordic Conference on Human-Computer Interaction","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123753068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This work-in-progress paper explores users' perceptions of the aesthetics of interaction. We describe a qualitative study using the Repertory Grid Technique (RGT) that elicited individuals' personal constructs: bipolar adjectives, such as beautiful vs. ugly, that characterize individuals' idiosyncratic ways of classifying and differentiating between a set of stimuli. The constructs were sorted by similarity, resulting in a set of aesthetic categories. Quantitative data (i.e., participants' ratings) from the RGT further enabled us to assess the internal consistency of the emerging categories and to chart the design space of aesthetic interactions. In all, 23 categories of aesthetics of interaction were established based on users' perceptions. These categories partially corroborate (e.g., speed, proximity, complexity) but also expand (e.g., natural realism, congruence, dimensionality) prior work on experience qualities in Human-Computer Interaction (HCI).
{"title":"Understanding Aesthetics of Interaction: A Repertory Grid Study","authors":"Mati Mõttus, E. Karapanos, D. Lamas, G. Cockton","doi":"10.1145/2971485.2996755","DOIUrl":"https://doi.org/10.1145/2971485.2996755","url":null,"abstract":"This work in progress paper explores users' perceptions of the aesthetics of interaction. We describe a qualitative study using the Repertory Grid Technique (RGT) that elicited individuals' personal constructs, bipolar adjectives such as beautiful vs ugly that characterize individuals' idiosyncratic ways of classifying and differentiating between a set of stimuli. The constructs were sorted by similarity, resulting in a set of aesthetic categories. Quantitative data (i.e., participants' ratings) from the RGT further enables us to assess the internal consistency of the emerging categories as well as to chart the design space of aesthetic interactions. All in all, 23 categories of aesthetics of interaction were established based on users' perceptions. These categories partially corroborated (e.g., speed, proximity, complexity) but also expanded (e.g., natural realism, congruence, dimensionality) prior work on experience qualities in Human-Computer Interaction (HCI).","PeriodicalId":190768,"journal":{"name":"Proceedings of the 9th Nordic Conference on Human-Computer Interaction","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121814030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While most mobile platforms offer motion sensing as input for creating tactile feedback, designing such feedback patterns remains hard as screens grow larger, e.g., on tabletop surfaces. This demonstration presents Ubitile, a finger-worn concept offering both motion sensing and vibration feedback for authoring vibrotactile feedback on tabletops. We suggest that the mid-air motion input space made accessible by Ubitile outperforms current GUI-based visual input techniques for designing tactile feedback. Additionally, Ubitile offers a hands-free input space for the tactile output. Ubitile integrates both input and output spaces within a single wearable interface, jointly affording spatial authoring and editing as well as active tactile feedback on and above tabletops.
{"title":"Ubitile: A Finger-Worn I/O Device for Tabletop Vibrotactile Pattern Authoring","authors":"Khanh-Duy Le, Kening Zhu, Tomasz Kosinski, M. Fjeld, Maryam Aj, Shengdong Zhao","doi":"10.1145/2971485.2996721","DOIUrl":"https://doi.org/10.1145/2971485.2996721","url":null,"abstract":"While most mobile platforms offer motion sensing as input for creating tactile feedback, it is still hard to design such feedback patterns while the screen becomes larger, e.g. tabletop surfaces. This demonstration presents Ubitile, a finger-worn concept offering both motion sensing and vibration feedback for authoring of vibrotactile feedback on tabletops. We suggest the mid-air motion input space made accessible using Ubitile outperforms current GUI-based visual input techniques for designing tactile feedback. Additionally Ubitile offers a hands-free input space for the tactile output. Ubitile integrates both input and output spaces within a single wearable interface, jointly affording spatial authoring/editing and active tactile feedback on- and above- tabletops.","PeriodicalId":190768,"journal":{"name":"Proceedings of the 9th Nordic Conference on Human-Computer Interaction","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121556515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
People's sedentary lifestyles are connected with serious health threats. The goal of our research is to gain novel insights into ways in which movement during knowledge work can be increased. We propose and study mobile-technology-mediated walking meetings. In this paper we present the results of a design research project with a two-phase qualitative user study: we first explored users' expectations of walking meetings (N=15) and designed the Walking metro mobile application concept, and then evaluated the user experience of the concept in field tests (N=14). Based on the findings, we propose 10 design implications for mobile walking meetings in three categories: designing for acceptability, non-interrupting guidance, and discreet persuasion and stimulation.
{"title":"Walk as You Work: User Study and Design Implications for Mobile Walking Meetings","authors":"Aino Ahtinen, Eeva Andrejeff, Maiju Vuolle, Kaisa Väänänen","doi":"10.1145/2971485.2971510","DOIUrl":"https://doi.org/10.1145/2971485.2971510","url":null,"abstract":"People's sedentary lifestyle is connected with serious health threats. The goal of our research is to gain novel insights on ways in which movement during knowledge work can be increased. We propose and study mobile technology mediated walking meetings. In this paper we present the results of a design research project with a two-phase qualitative user study, in which we first explored users' expectations towards walking meetings (N=15) and designed the Walking metro mobile application concept. We then evaluated user experience of the concept in field tests (N=14). Based on the findings, we propose 10 design implications for mobile walking meetings in three categories: designing for acceptability, non-interrupting guidance, and discreet persuasion and stimulation.","PeriodicalId":190768,"journal":{"name":"Proceedings of the 9th Nordic Conference on Human-Computer Interaction","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128366899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Humans are inherently skilled at using subtle physiological cues from other persons, for example gaze direction in a conversation. Personal computers have yet to explore this implicit input modality. In a study with 14 participants, we investigate how a user's gaze can be leveraged in adaptive computer systems. In particular, we examine the impact of different languages on eye movements by presenting simple questions in multiple languages to our participants. We found that fixation duration is sufficient to ascertain whether a user is highly proficient in a given language. We propose how these findings could be used to implement adaptive visualizations that react implicitly to the user's gaze.
{"title":"Towards Using Gaze Properties to Detect Language Proficiency","authors":"Jakob Karolus, Paweł W. Woźniak, L. Chuang","doi":"10.1145/2971485.2996753","DOIUrl":"https://doi.org/10.1145/2971485.2996753","url":null,"abstract":"Humans are inherently skilled at using subtle physiological cues from other persons, for example gaze direction in a conversation. Personal computers have yet to explore this implicit input modality. In a study with 14 participants, we investigate how a user's gaze can be leveraged in adaptive computer systems. In particular, we examine the impact of different languages on eye movements by presenting simple questions in multiple languages to our participants. We found that fixation duration is sufficient to ascertain if a user is highly proficient in a given language. We propose how these findings could be used to implement adaptive visualizations that react implicitly on the user's gaze.","PeriodicalId":190768,"journal":{"name":"Proceedings of the 9th Nordic Conference on Human-Computer Interaction","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132915447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}