"VoiceWriting: a completely speech-based text editor" — M. De Marsico, Francesca Romana Mattei. CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter, 11 July 2021. DOI: https://doi.org/10.1145/3464385.3464735

Abstract: Assistive technologies, mostly based on speech recognition and synthesis, help visually impaired people write text on digital devices. However, they do not fully support non-sequential text editing without the use of sight. This paper discusses the design of the interaction protocol underlying the first prototype of a text editor designed specifically for people with very poor eyesight. It does not require visual localization of text for non-sequential editing of multi-paragraph documents, and it exploits only voice and "uninterpreted" keyboard input, namely the outmoded "press any key", for mode switching. Preliminary tests complete the paper.
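The "press any key" mode switch described in the abstract can be pictured as a two-state toggle in which the key's identity is deliberately ignored. The following is a minimal illustrative sketch, not the authors' implementation; the `Mode` and `ModeSwitcher` names are our own:

```python
from enum import Enum, auto

class Mode(Enum):
    DICTATION = auto()  # spoken words are inserted as text
    COMMAND = auto()    # spoken words are interpreted as editing commands

class ModeSwitcher:
    """Toggles the editor mode on any keypress; which key was pressed is ignored."""

    def __init__(self):
        self.mode = Mode.DICTATION

    def on_any_key(self, _key=None):
        # "Press any key": the key value is deliberately uninterpreted,
        # so no visual localization of a specific key is required.
        self.mode = Mode.COMMAND if self.mode is Mode.DICTATION else Mode.DICTATION
        return self.mode
```

The point of the design is that switching never depends on finding a particular key by sight, which is what makes it usable without visual feedback.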
"Forward Reasoning Decision Support: Toward a More Complete View of the Human-AI Interaction Design Space" — Z. Zhang, Yuanting Liu, H. Hussmann. CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter, 11 July 2021. DOI: https://doi.org/10.1145/3464385.3464696

Abstract: Decision support systems based on AI are usually designed to generate complete outputs entirely automatically and to explain those outputs to users. However, explanations, no matter how well designed, may not adequately address the output uncertainty of such systems in many applications. This is especially the case when the human-out-of-the-loop problem persists, which is a fundamental human limitation. There is no reason to limit decision support systems to such backward reasoning designs, though. We argue that more interactive forward reasoning designs, in which users are actively involved in the task, can be effective in managing output uncertainty. We therefore call for a more complete view of the design space for decision support systems, one that includes both backward and forward reasoning designs. We argue that such a view is necessary to overcome the barriers that hinder AI deployment, especially in high-stakes applications.
"At the Frontiers of Art and IoT: the IoTgo Toolkit as a Probe for Artists" — R. Gennari, Mehdi Rizvi. CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter, 11 July 2021. DOI: https://doi.org/10.1145/3464385.3464727

Abstract: Smart things, such as smart watches and smart doorbells, are becoming part of our daily life. Artists usually have little experience of how smart things are designed, yet experiencing their design can trigger artists' imagination and help them produce novel artwork. This paper reports on workshops with five artists without prior experience of smart-thing design. At the core of the workshops were the ideation tools of IoTgo, a smart-thing design toolkit. The first workshop served to use and co-create parts of the ideation tools of IoTgo for and with artists. The second workshop enabled artists to use the tools to ideate novel smart-artwork things and to engage again in the evolution of IoTgo. We reflect on the results of the workshops and distill useful lessons on design toolkits for artists.
"Wearable Mapping Suit: Body Mapping for Identification Wearables" — Friederike Fröbel, Marie Beuthel, Gesche Joost. CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter, 11 July 2021. DOI: https://doi.org/10.1145/3464385.3464729

Abstract: This research describes a new design method, the Wearable Mapping Suit, which combines bodystorming and prototyping techniques in a human-centred development process for wearable technologies. The method builds on visually prepared body maps that are used in designerly decision-making processes. The idea is illustrated with an example workshop in which six participants and two facilitators co-created four visual Identification (ID) Wearables used for authentication interactions.
"The Role of Digitalization in Improving the Quality of Live in Rural (Industrialized) Regions" — David Unbehaun, V. Wulf, Johannes Schädler, M. Lewkowicz, C. Bassetti, M. Ackerman. CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter, 11 July 2021. DOI: https://doi.org/10.1145/3464385.3467686

Abstract: Rural regions in the EU and all over the world are often characterized by diverse conditions and aspects, such as geographical and landscape features, digital infrastructures, socio-economic, demographic, cultural and environmental circumstances, as well as hierarchically grown decision structures and dense social networks among their inhabitants. Digitalization and improving the quality of life in rural and industrialized regions is a transformative yet complex process that depends inherently on the ability of regions to face several challenges: modernizing their industrial base, upgrading the skills of the workforce, compensating for job losses in key sectors, enhancing well-being and living standards, and improving their contribution to national performance and to more inclusive and resilient societies. With this workshop, we aim to contribute to this growing field by sharing experiences and identifying interdisciplinary perspectives on regions in industrial and digital transition, helping them become more resilient in the context of major shifts brought about by globalization, decarbonization and ongoing technological change.
"VR-ISLAND: Virtual Reality, Inclusion and Special Language Needs" — Giulia Staggini, Rita Cersosimo. CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter, 11 July 2021. DOI: https://doi.org/10.1145/3464385.3467474

Abstract: The workshop "VR-ISLAND: Virtual Reality, Inclusion and Special Language Needs" concerns the application of Virtual Reality (VR) and Augmented Reality (AR) tools and environments to inclusive language teaching. All the contributions included in the workshop address language teaching to students with Special Language Needs (SLN), that is, students with language differences, language disorders, cognitive and sensorial disabilities, or specific and diverse socio-linguistic backgrounds. Based on the discussion, VR and AR technologies prove to have a positive impact on language inclusion because they increase students' engagement and motivation, enhance certain language skills, and reduce anxiety and the affective filter.
"Supporting Depression Screening with Multimodal Emotion Detection" — R. Francese, Pasquale Attanasio. CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter, 11 July 2021. DOI: https://doi.org/10.1145/3464385.3464708

Abstract: Depression is the most common mental disorder in the Italian population. The Beck Depression Inventory-II (BDI-II) questionnaire is generally adopted by clinicians as an indicator of depression. In this paper we propose LieToMe, a mobile application that collects, together with the patient's BDI-II answers, also their images and voice. Emotional analysis is performed on the video and on the images by using a deep Convolutional Neural Network. The correlation between the emotional results and the patient's BDI-II scores is provided to the clinician as an indication of the relationship between the patient's emotional state and the result of the depression screening. We conducted a preliminary evaluation aiming at assessing clinician satisfaction with the support offered by LieToMe. Participants were 8 clinicians. Results seem to support the acceptability of LieToMe in clinical practice.
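The correlation between per-patient emotion scores and BDI-II scores that such a system reports to the clinician could, for instance, be a Pearson correlation. The abstract does not specify the exact statistic, so this is a hedged sketch under that assumption; the `pearson` function and the sample values are our own:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score series,
    e.g. per-session emotion-intensity scores vs. BDI-II item scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value near +1 would indicate that higher detected negative-emotion intensity tracks higher questionnaire scores; the clinician interprets the strength of that relationship, not the tool.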
"Automatic Emotion Recognition from Facial Expressions when Wearing a Mask" — G. Castellano, B. D. Carolis, Nicola Macchiarulo. CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter, 11 July 2021. DOI: https://doi.org/10.1145/3464385.3464730

Abstract: People communicate emotions through several nonverbal channels, and facial expressions play an important part in this communicative process. Automatic Facial Expression Recognition (FER) is a very active topic that has attracted much interest in recent years. Most FER systems try to recognize emotions from the entire face of a person. Unfortunately, due to the pandemic, people wear a mask most of the time, so their faces are not fully visible. In our study, we investigate the effectiveness of a FER system in recognizing emotions only from the eye region, the sole visible region when wearing a mask, by comparing the results with those of the same approach applied to the entire face. The proposed pipeline involves several steps: detecting a face in an image, detecting a mask on the face, extracting the eye region, and recognizing the emotion expressed on the basis of that region. As expected, emotions expressed mainly through the mouth region (e.g. disgust) are not recognized at all, while positive emotions are the ones best determined by considering only the eye region.
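The four-step pipeline named in the abstract can be sketched as a composition of pluggable stages. This is an illustrative skeleton, not the authors' CNN-based implementation: every stage name is our own, and each stage would in practice be a trained detector or classifier (e.g. a face detector, a mask classifier, an eye-region cropper, an emotion CNN):

```python
def recognize_emotion_masked(image, face_detector, mask_detector,
                             eye_extractor, classifier):
    """Pipeline sketch: face detection -> mask check -> region selection
    -> emotion classification. Falls back to the whole face when no mask
    is detected, and returns None when no face is found."""
    face = face_detector(image)
    if face is None:
        return None                      # step 1 failed: no face in the image
    if mask_detector(face):
        region = eye_extractor(face)     # masked: classify from the eye region
    else:
        region = face                    # unmasked: classify from the whole face
    return classifier(region)
```

Keeping the stages as parameters makes it easy to run the comparison the paper describes: swap `eye_extractor` for an identity function and the same pipeline classifies the entire face.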
"Interactive Experiences" — María Menéndez-Blanco, S. U. Yavuz, Jennifer Schubert. CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter, 11 July 2021. DOI: https://doi.org/10.1145/3464385.3466877

Abstract: The first edition of the Interactive Experiences (IEs) track at CHItaly welcomed prototypes and installations that explored, represented, and challenged the boundaries of technology-mediated interactions. We received a total of 15 high-quality submissions and accepted 11 of them. The submissions were peer-reviewed by members of the committee, who provided insightful comments and suggestions to the authors. The selected works addressed topics such as machine learning in user profiling and technology obsolescence in creative, playful, and provocative ways. All these works will be part of the IEs exhibition that will take place during the main conference.
"Automated Feedback to Students in Data Science Assignments: Improved Implementation and Results" — Alessandra Galassi, P. Vittorini. CHItaly 2021: 14th Biannual Conference of the Italian SIGCHI Chapter, 11 July 2021. DOI: https://doi.org/10.1145/3464385.3464387

Abstract: The automated grading of assignments is a long-discussed topic in the field of technology-enhanced learning. Within this large research area, the authors focus on the automated grading of assignments made up of a mix of commands (in the R language), their output, and comments (in natural language). In particular, the paper discusses several improvements to the automated feedback generated by a tool developed at the University of L'Aquila to support students during their study of the subject. The goal of the research is to implement feedback that explains the automated grading, providing students with the causes of their mistakes and suggestions on how to correct them. Accordingly, we designed and developed an automated feedback mechanism, used by students during the current academic year to support their homework. We then collected the students' opinions through both standardised and ad-hoc questionnaires, so as to evaluate the effectiveness of our proposal and identify the aspects to improve. The results highlight increased engagement while performing the assessment and the usefulness of the feedback, as well as where the explanation was clear and where improvements are needed.
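Explanatory feedback of the kind described, comparing a student's computed values against expected ones and attaching a cause and a suggestion to each mismatch, can be sketched as follows. This is a hypothetical illustration, not the tool's actual implementation; the `build_feedback` function and the grading keys are our own:

```python
def build_feedback(expected, submitted):
    """For each expected result (e.g. a value an R command should produce),
    emit a human-readable line explaining the grade: correct, missing,
    or wrong, with the expected value so the student can locate the cause."""
    feedback = []
    for key, exp in expected.items():
        got = submitted.get(key)
        if got == exp:
            feedback.append(f"{key}: correct.")
        elif got is None:
            feedback.append(f"{key}: missing; expected {exp!r}.")
        else:
            feedback.append(f"{key}: got {got!r}, expected {exp!r}; "
                            "check the command that produced this value.")
    return feedback
```

The design choice illustrated here is the one the paper motivates: instead of a bare grade, each line names the specific result that was wrong and points the student toward the command responsible for it.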