This work aims at assessing whether and how we can improve the accessibility of web pages for dyslexic users, and at determining which new requirements could be added to those currently proposed in the Italian Stanca Act, whose standards derive from the WCAG 2.0 accessibility guidelines. To achieve this goal, we designed a test targeted at students diagnosed with dyslexia. Results showed that improvements can be achieved by taking into account a set of parameters not specifically considered by WCAG 2.0.
Giulia Venturini and Cristina Gena. "Testing web-based solutions for improving reading tasks in students with dyslexia." Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter (CHItaly '17), September 18, 2017. DOI: 10.1145/3125571.3125573
The 1st Workshop on Games-Human Interaction (GHItaly '17) aims to bring together scholars and industry practitioners to establish common ground on the topic of games-human interaction.
M. De Marsico, L. Ripamonti, Davide Gadia, D. Maggiorini, and I. Mariani. "GHItaly'17: 1st Workshop on Games-Human Interaction." Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter (CHItaly '17), September 18, 2017. DOI: 10.1145/3125571.3125578
In this paper we analyze how interaction develops in multi-device environments by distinguishing two layers: an interaction-in-the-large layer defines an interactive experience across different devices and locations, where roles and tasks evolve and intertwine; an interaction-in-the-small layer defines the actions performed and the interaction techniques used to execute a specific, self-contained task on a device. We present a notation to describe interaction in the large and demonstrate how it can help in understanding the interaction layers perceived by users during an interactive experience. Finally, we report on a user experiment in a real application scenario, evaluating which interaction layers users observe and whether they are able to recognize the boundaries theoretically identified through the notation.
A. Celentano and E. Dubois. "Interaction-in-the-large vs interaction-in-the-small in multi-device systems." Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter (CHItaly '17), September 18, 2017. DOI: 10.1145/3125571.3125577
This workshop follows the educational workshops held at the latest editions of the ACM CHItaly [1] and AVI [3] conferences, further elaborating on the relationship between HCI and education. The goal of the workshop is twofold: on one side, to investigate the methods of HCI in educational contexts where the discipline is the primary subject; on the other, to investigate the role that the discipline can play in supporting education in a variety of contexts, starting from schools and moving to other traditional contexts such as museums and exhibitions, but also in novel situations where the focus is on public engagement.
Fabio Pittarello, G. Volpe, and M. Zancanaro. "HCI and education in a changing world: from school to public engagement." Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter (CHItaly '17), September 18, 2017. DOI: 10.1145/3125571.3125576
This paper presents a user experience study of interaction with printed maps for providing digitally augmented tourism information. The Interactive Maps system has been implemented based on an interactive printed matter framework which provides all the necessary components for developing smart applications that offer printed matter interaction, and has been deployed and evaluated in the context of the publicly available Tourism InfoPoint of the Municipality of Heraklion. The results of the evaluation highlight that interacting with digitally augmented paper is quite easy and natural, while the overall user experience is positive.
George Margetis, S. Ntoa, M. Antona, and C. Stephanidis. "Interacting with augmented paper maps: a user experience study." Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter (CHItaly '17), September 18, 2017. DOI: 10.1145/3125571.3125584
Larger smartphones have become increasingly commonplace, sometimes blurring the boundary between phones and tablets. UI guidelines and usability studies are rarely updated and are still largely based on smaller screens or one-handed operation, which can be tiresome on large devices that may require different, or even two-handed, postures. Past usability studies have successfully proposed crowd-sourced data collection systems using mobile applications published on app stores. This work investigates how similar systems can benefit from gamification elements in order to accelerate data collection and produce accurate results over relatively short periods. We present an Android game that challenges users in short 30-second rounds, collecting performance data from users operating the touchscreen with different device grips and postures. A preliminary analysis of the first 60,000 touch interactions is discussed and shown to be consistent with expected results.
Silvia Malatini, L. Klopfenstein, and A. Bogliolo. "Gamification for crowdsourced data collection in mobile usability field studies." Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter (CHItaly '17), September 18, 2017. DOI: 10.1145/3125571.3125597
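The core aggregation step behind such a study — grouping logged touch events by device grip to compare pointing accuracy — can be sketched as follows. This is a hypothetical illustration under assumed field names and grip labels, not the authors' actual implementation.

```python
from dataclasses import dataclass
from math import hypot
from statistics import mean

@dataclass
class TouchEvent:
    """One logged tap: where the target was, where the finger landed,
    and how the device was held during the game round (all assumed fields)."""
    target_x: float
    target_y: float
    touch_x: float
    touch_y: float
    grip: str  # e.g. "one-handed-thumb", "two-handed"

def error_px(e: TouchEvent) -> float:
    """Euclidean distance between intended target and actual touch point."""
    return hypot(e.touch_x - e.target_x, e.touch_y - e.target_y)

def mean_error_by_grip(events: list[TouchEvent]) -> dict[str, float]:
    """Aggregate crowd-sourced taps into a per-grip accuracy summary."""
    by_grip: dict[str, list[float]] = {}
    for e in events:
        by_grip.setdefault(e.grip, []).append(error_px(e))
    return {grip: mean(errs) for grip, errs in by_grip.items()}
```

A per-grip summary like this is what would let a crowdsourced study compare, say, thumb reach on large screens against two-handed operation.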
With the proliferation of sensory technologies that not only stimulate the senses of vision and hearing, but also our senses of touch, smell, and taste, we are confronted with the challenge of mastering those "new" senses in the design of interactive systems. To meaningfully design multisensory interfaces and enrich human-technology interactions, we need to systematically investigate the technical, perceptual, and experiential parameters of sensory and multisensory stimulation. Here, I focus in particular on the study of tactile, gustatory, and olfactory experiences facilitated by novel technologies (e.g., mid-air haptic devices, olfactory devices) and on the combination of objective and subjective measures within sensory science, psychology, HCI, and user experience research.
Marianna Obrist. "Mastering the Senses in HCI: Towards Multisensory Interfaces." Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter (CHItaly '17), September 18, 2017. DOI: 10.1145/3125571.3125603
Learning to play a musical instrument is a difficult task, mostly based on the master-apprentice model. Technologies are rarely employed and are usually restricted to audio and video recording and playback. Nevertheless, multimodal interactive systems can complement actual learning and teaching practice, by offering students guidance during self-study and by helping teachers and students to focus on details that would be otherwise difficult to appreciate from usual audiovisual recordings. This paper introduces a multimodal corpus consisting of the recordings of expert models of success, provided by four professional violin performers. The corpus is publicly available on the repoVizz platform, and includes synchronized audio, video, motion capture, and physiological (EMG) data. It represents the reference archive for the EU-H2020-ICT Project TELMI, an international research project investigating how we learn musical instruments from a pedagogical and scientific perspective and how to develop new interactive, assistive, self-learning, augmented-feedback, and social-aware systems to support musical instrument learning and teaching.
G. Volpe, Ksenia Kolykhalova, Erica Volta, Simone Ghisio, G. Waddell, Paolo Alborno, Stefano Piana, C. Canepa, and Rafael Ramírez-Meléndez. "A multimodal corpus for technology-enhanced learning of violin playing." Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter (CHItaly '17), September 18, 2017. DOI: 10.1145/3125571.3125588
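A practical step behind any such synchronized corpus is aligning streams recorded on different clocks (e.g., an EMG channel against video frame timestamps) onto a common timeline. The sketch below shows one standard way to do this via linear interpolation; it is an illustrative assumption, not the repoVizz or TELMI pipeline.

```python
import numpy as np

def resample_to_frames(signal_t, signal_v, frame_t):
    """Linearly interpolate a physiological signal (e.g. an EMG envelope,
    sampled on its own clock) onto the timestamps of video frames, so that
    all modalities of a recording share a common timeline.

    signal_t: monotonically increasing sample timestamps (seconds)
    signal_v: signal values at those timestamps
    frame_t:  target timestamps (e.g. one per video frame)
    """
    return np.interp(frame_t, signal_t, signal_v)
```

With every modality resampled to the same frame clock, per-frame feature vectors (audio, motion capture, EMG) can be stacked directly.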
Procedural Content Generation is widely used in video game design and development, since it can relieve designers of repetitive work, optimize the development process, increase replayability, adapt games to specific audiences, and enable new game mechanics. However, when applying generative techniques, it is important not to forget that the main goal is not optimization, but providing fun and compelling experiences to the player. In the present work, we tackle the issue of creating and testing an automated level editor for platform video games, starting from the work of [22,31]. The tool aims to produce levels that are both playable and fun, using Afro-American musical rhythms as the starting point for the structure of the levels. At the same time, it should guarantee maximum freedom to the level designer and interactively suggest corrections that improve the quality of the player experience.
Claudio Mazza, L. Ripamonti, D. Maggiorini, and Davide Gadia. "FUN PLEdGE 2.0: a FUNny Platformers LEvels GEnerator (Rhythm Based)." Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter (CHItaly '17), September 18, 2017. DOI: 10.1145/3125571.3125592
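Rhythm-based level generation of the kind the abstract describes is typically a two-stage mapping: beats become player actions, and actions become geometry that affords them. A minimal hypothetical sketch follows; the beat symbols, action names, and chunk types are assumptions for illustration, not the tool's actual encoding.

```python
# Stage 1: each symbol in the rhythm pattern maps to a player action.
# "x" marks an accented beat (the player must act), "-" a rest.
ACTION_FOR_BEAT = {"x": "jump", "-": "run"}

# Stage 2: each action maps to a level chunk that affords it:
# a jump needs a gap to clear, a run needs flat ground.
GEOMETRY_FOR_ACTION = {"jump": "gap", "run": "flat"}

def rhythm_to_level(rhythm: str) -> list[str]:
    """Translate a rhythm pattern into a sequence of level chunks."""
    actions = [ACTION_FOR_BEAT[beat] for beat in rhythm]
    return [GEOMETRY_FOR_ACTION[action] for action in actions]
```

Keeping the two mappings separate is what lets a designer intervene: the rhythm fixes the pacing of actions, while the geometry chosen for each action remains editable.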
This paper describes the evaluation of two interaction modalities for Active Fashion, the first prototype of a system designed to interactively provide information about dresses shown on mannequins in a shop window. Using the system, the user can browse available sizes, colors, prices, and similar products. Due to the nature of such a system, the interaction must be touch-less and natural. The developed solutions use the Microsoft Kinect 2 as the sensing device. The first modality is based on gestures, while the second is based on gaze pointing. Evaluation results show that, even though the interaction was not completely satisfying in terms of control, users preferred the gaze-based approach and felt positively engaged during the interaction.
B. D. Carolis and Giuseppe Palestra. "Evaluating Natural Interaction with a Shop Window." Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter (CHItaly '17), September 18, 2017. DOI: 10.1145/3125571.3125601