{"title":"An audio MOOC framework for the digital inclusion of low literate people in the distance education process","authors":"Raj Kishen Moloo, Kavi Kumar Khedo, Tadinada Venkata Prabhakar","doi":"10.1007/s10209-023-01051-5","DOIUrl":"https://doi.org/10.1007/s10209-023-01051-5","url":null,"abstract":"","PeriodicalId":49115,"journal":{"name":"Universal Access in the Information Society","volume":"12 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135268370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-20 | DOI: 10.1007/s10209-023-01038-2
Title: Needs analysis for the design of a digital platform to train professionals in online family intervention through live supervision of real cases
Authors: Sonia Torras, Anna Vilaregut, Xavier Canaleta, Eduard Martí
Abstract:
Purpose: Mental health professionals undergo continuous training throughout their careers. Part of this training consists of the supervision of cases by an entire healthcare team, a practice that allows them to consolidate their understanding of behaviour and emotions and to strengthen their relationships with patients and their families. The COVID-19 pandemic has had a great impact on this training methodology, leading to a significant increase in the use of digital platforms, but such digital tools are not well adapted to this context, especially when it comes to the supervision of real online cases. The goals of this study are: (1) to analyse what professionals need in order to carry out online interventions and training through the live supervision of real online cases, and (2) to create a prototype of a specific digital platform intended to meet the detected needs.
Methods: 28 semi-structured interviews were conducted with supervisors (N = 14) and professionals in training (N = 14).
Results: The results provide a deeper understanding of the difficulties and benefits that professionals encounter when conducting live online supervision with existing video-conference platforms.
Conclusion: The analysis points to the need for a platform that can overcome the difficulties and enhance the benefits of digitalizing family intervention training through the live supervision of real cases. These specific needs have yet to be addressed by existing digital platforms.
Pub Date: 2023-10-13 | DOI: 10.1007/s10209-023-01048-0
Title: Automatic captions on video calls: a must for the older adults
Authors: Eduardo Nacimiento-García, Carina S. González-González, Francisco L. Gutiérrez-Vela
Abstract: In recent years the use of video call and videoconference tools has grown steadily, and the COVID-19 pandemic accelerated this growth in the educational, work and family spheres because face-to-face meetings carried a risk of contagion. Throughout the world, many older people are affected by hearing loss, and auditory functional diversity can make it difficult to enjoy video calls. Automatic captions might help these people, but not all video calling tools offer this functionality, and those that do support only certain languages. We developed an automatic conversation-captioning tool based on automatic speech recognition (speech to text), built on the free software tool Coqui STT. The captioning tool is independent of the video call platform used and allows older adults, or anyone with auditory functional diversity, to enjoy video calls in a simple way. We designed a transparent user interface that overlays the video call window and lets users easily change the text size, color, and background settings; this matters because many older people also have visual functional diversity and could otherwise have problems reading the captions, so each person must be able to adapt the text to their needs. An analysis involving older people was carried out to assess the benefits of the interface and their configuration preferences, and to propose improvements to the way the text is displayed on the screen. Spanish and English were tested during the investigation, but the tool makes it easy to install dozens of additional languages based on models trained for Coqui STT.
Pub Date: 2023-10-12 | DOI: 10.1007/s10209-023-01049-z
Title: Toward digital inclusion of older adults in e-health: a case study on support for physical activity
Authors: Åsa Revenäs, Lars Ström, Antonio Cicchetti, Maria Ehn
Abstract: Older adults are a heterogeneous population for which many e-health innovations are inaccessible. Involving older adults in user-centered design (UCD), with a specific focus on inclusive design, is important to make e-health more accessible to this user group. This case study explored the feasibility of a new UCD approach intended to minimize bias in the design phase of a digital support for older adults’ physical activity (PA). The study used mixed methods and applied UCD principles in a four-iteration design phase followed by an evaluation phase, in which 11 and 15 older adults participated, respectively. The users’ gender, PA level and technology experience (TE) were considered in recruitment, data analysis and the prioritization of improvement efforts. In the design phase, users of different gender, PA level and TE participated and contributed feedback, which was prioritized in the development. The resulting adaptations included improving readability, simplifying layout and features, clarifying structure, and making the digital content more inclusive and relevant. The evaluation showed that the users had a positive experience of the prototype and could use it with some help. The study demonstrated that making e-health digitally inclusive for older adults requires addressing several aspects. The UCD approach was feasible for amending user bias and for confirming that users of both genders and with varied PA and TE levels shaped the design. However, evaluation of the method with larger samples is needed, as is further research on methods to involve digitally excluded populations in UCD.
Pub Date: 2023-10-08 | DOI: 10.1007/s10209-023-01045-3
Title: Omissions and inferential meaning-making in audio description, and implications for automating video content description
Authors: Kim Starr, Sabine Braun
Abstract: There is broad consensus that audio description (AD) is a modality of intersemiotic translation, but there are different views on how AD can be more precisely conceptualised. While Benecke (Audiodeskription als partielle Translation. Modell und Methode, LIT, Berlin, 2014) characterises AD as ‘partial translation’, Braun (T 28: 302–313, 2016) hypothesises that what audio describers appear to ‘omit’ from their descriptions can normally be inferred by the audience, drawing on narrative cues from dialogue, mise-en-scène, kinesis, music or sound effects. The study reported in this paper tested this hypothesis using a corpus of material created during the H2020 MeMAD project. The MeMAD project aimed to improve access to audiovisual (AV) content through a combination of human and computer-based methods of description. One of the MeMAD workstreams addressed human approaches to describing visually salient cues; this included an analysis of the potential impact of omissions in AD, which is the focus of this paper. Using a corpus of approximately 500 audio-described film extracts, we identified the visual elements that can be considered essential for the construction of the filmic narrative and then performed a qualitative analysis of the corresponding audio descriptions to determine how these elements are verbally represented and whether any omitted elements could be inferred from other cues accessible to visually impaired audiences. We then identified the most likely source of these inferences and the conditions on which retrieval could be predicated, preparing the ground for future reception studies to test our hypotheses with target audiences. In this paper, we discuss the methodology used to determine where omissions occur in the analysed audio descriptions, consider worked examples from the MeMAD500 film corpus, and outline the findings of our study, namely that various strategies are relevant to inferring omitted information, including the use of proximal and distal contextual cues and reliance on common knowledge and iconic scenarios. To conclude, we consider how to overcome significant omissions in human-generated AD, for example by using extended AD formats, and how to mitigate similar gaps in machine-generated descriptions, where incorporating dialogue analysis and other supplementary data into the computer model could resolve many omissions.
Pub Date: 2023-10-06 | DOI: 10.1007/s10209-023-01046-2
Title: An AI-based English education platform during the COVID-19 pandemic
Authors: Hansuk Um, Hisam Kim, Dain Choi, Hyungna Oh
Abstract: This study examines whether the use of AI-Pengtalk, an AI-based conversational English programme provided by a broadcasting company (EBS) that specializes in public education, can significantly improve conversational English skills and narrow the English-proficiency gap associated with parental socioeconomic status (SES). Over the course of four weeks, from April 27 to May 22, 2020, 108 fourth-grade classes in 54 elementary schools voluntarily participated in the experiment. In each school, two classes were designated as a treatment group and a control group. The treatment group received tablets with a pilot version of AI-Pengtalk installed, and students were encouraged to use the programme. Surveys and English tests were administered before and after the intervention. After four weeks, participants’ test scores, log files, and survey responses were analysed. A series of difference-in-differences (DID) analyses shows that, relative to the control group, the use of AI-Pengtalk improved the treatment group’s self-evaluation of their English abilities, confidence in using English, preference for English itself, and the amount of time spent studying English during the pilot period. When other variables were controlled for, the use of AI-Pengtalk also helped the treatment group achieve higher test scores. The study implies that smart English education tools such as AI-Pengtalk may be especially able to compensate for academic setbacks caused by low parental SES or, in the case of English learning, a reluctance to converse in English with other students.
Pub Date: 2023-09-29 | DOI: 10.1007/s10209-023-01042-6
Title: Exploring automatic text-to-sign translation in a healthcare setting
Authors: Lyke Esselink, Floris Roelofsen, Jakub Dotlačil, Shani Mende-Gillings, Maartje de Meulder, Nienke Sijm, Anika Smeijers
Abstract: Communication between healthcare professionals and deaf patients has been particularly challenging during the COVID-19 pandemic. We explored the possibility of automatically translating phrases that are frequently used in the diagnosis and treatment of hospital patients, in particular phrases related to COVID-19, from Dutch or English to Dutch Sign Language (NGT). The prototype system we developed displays translations either as pre-recorded videos featuring a deaf human signer (for a limited number of sentences) or as animations featuring a computer-generated signing avatar (for a larger, though still restricted, number of sentences). We evaluated the comprehensibility of the signing avatar compared to the human signer. We found that, while individual signs are recognized almost as often when signed by the avatar as when signed by a human, sentence comprehension rates and clarity scores for the avatar are substantially lower than for the human signer. We identify a number of concrete limitations of the JASigning avatar engine that underlies our system: it currently does not offer sufficient control over mouth shapes, over the relative speed and intensity of signs in a sentence (prosody), or over transitions between signs. These limitations need to be overcome in future work for the engine to become usable in practice.
Pub Date: 2023-09-22 | DOI: 10.1007/s10209-023-01043-5
Title: Dynamic detection of accessibility smells
Authors: Fernando Durgam, Julián Grigera, Alejandra Garrido
Pub Date: 2023-09-21 | DOI: 10.1007/s10209-023-01044-4
Title: Schools as living labs for the new European bauhaus
Authors: Loukas Katikas, Sofoklis Sotiriou
Pub Date: 2023-09-19 | DOI: 10.1007/s10209-023-01041-7
Title: Empowering soft skills in children with ADHD through the co-creation of tangible tabletop games
Authors: Eva Cerezo, Carina S. González-González, Clara Bonillo
Abstract: There has been a push in recent years to introduce soft skills at different levels of education, and tangible technologies are an excellent tool for doing so. However, integrating digital skills for children with ADHD remains challenging, and educators need effective strategies to promote these skills. We therefore investigate which methods and frameworks are most appropriate for children with ADHD when designing technology and promoting creativity and social skills. A pilot experience is also presented in which a team of children with ADHD co-created a game using tangible tabletops. The results show that the strategies used promoted positive behaviors in terms of communication, collaboration, and creativity during the sessions. The contribution of this research is that it provides examples of effective strategies for promoting soft skills in children with ADHD.