The DARE Index - Monitoring the Progress of Digital Accessibility around the World - A Research Conducted by Advocates for Advocates
Axel Leblois. DOI: 10.1145/3441852.3487959

It is not sufficient to simply state the goal of improved digital access for people with disabilities without properly monitoring progress toward that goal. This keynote speech describes the Digital Accessibility Rights Evaluation (DARE) Index, which is coordinated by the Global Initiative for Inclusive ICTs (G3ICT). The DARE Index is a benchmarking tool for disability advocates, governments, civil society, international organizations, and policy makers to trace country progress in making Information and Communication Technologies (ICT) accessible to all, in compliance with Article 9 of the Convention on the Rights of Persons with Disabilities (CRPD). The DARE Index measures three categories of variables in each country: country commitments (legal, regulatory, policies, and programs), country capacity to implement (organization, processes, resources), and actual digital accessibility outcomes for persons with disabilities across 10 areas of products and services. Data is collected in close cooperation with Disabled Peoples' International (DPI) and persons with disabilities worldwide, who are best positioned to assess and report on digital accessibility in their respective countries. This keynote speech for ASSETS 2021 describes the DARE Index and the most recent data collected in 2020, and highlights how the Index can be used to support digital accessibility research.
Nearmi: A Framework for Designing Point of Interest Techniques for VR Users with Limited Mobility
Rachel L. Franz, Sasa Junuzovic, Martez E. Mott. DOI: 10.1145/3441852.3471230

We propose Nearmi, a framework that enables designers to create customizable and accessible point-of-interest (POI) techniques in virtual reality (VR) for people with limited mobility. Designers can use Nearmi by creating and combining instances of its four components: representation, display, selection, and transition. These components enable users to gain awareness of POIs in virtual environments, and automatically re-orient the virtual camera toward a selected POI. We conducted a video elicitation study where 17 participants with limited mobility provided feedback on different Nearmi implementations. Although participants generally weighed the same design considerations when discussing their preferences, their choices reflected tradeoffs in accessibility, realism, spatial awareness, comfort, and familiarity with the interaction. Our findings highlight the need for accessible and customizable VR interaction techniques, as well as design considerations for building and evaluating these techniques.
Accept or Address? Researchers' Perspectives on Response Bias in Accessibility Research
Joy Ming, Sharon Heung, Shiri Azenkot, Aditya Vashistha. DOI: 10.1145/3441852.3471216

Response bias has been framed as the tendency of a participant's response to be skewed by a variety of factors, including study design and participant-researcher dynamics. Response bias is a concern for all researchers who conduct studies with people, especially those working with participants with disabilities. This is because these participants' diverse needs require methodological adjustments, and because differences in disability identity between researcher and participant influence power dynamics. Despite its relevance, little literature connects response bias to accessibility. We conducted semi-structured interviews with 27 accessibility researchers on how response bias manifested in their research and how they mitigated it. We present unique instances of response bias and how it is handled in accessibility research; insights into how response bias interacts with other biases, such as researcher or sampling bias; and philosophies and tensions around response bias, such as whether to accept or address it. We conclude with guidelines for thinking about response bias in accessibility research.
Visionary Caption: Improving the Accessibility of Presentation Slides Through Highlighting Visualization
Carmen Yip, Jie Mi Chong, Sin Yee Kwek, Yong Wang, Kotaro Hara. DOI: 10.1145/3441852.3476539

Presentation slides are widely used in settings such as academic talks and business meetings. Captions placed on slides help deaf and hard of hearing (DHH) people understand spoken content, but simultaneously comprehending the visual content on a slide and associating it with the caption text can be challenging. In this paper, we design and develop a visualization technique to highlight charts on a slide and numerical data in the caption, and to associate the two. We first conduct a small formative study with people with and without hearing impairments to assess the value of the visualization technique using a low-fidelity video prototype. We then develop Visionary Caption, a visualization technique that uses natural language processing to automatically highlight visual content and numerical phrases and show the association between them. We present a scenario and personas to showcase the potential utility of Visionary Caption and to guide its future development.
Using Games to Practice Screen Reader Gestures
Gonçalo Ferreira Lobo, David Gonçalves, Pedro Pais, Tiago Guerreiro, André Rodrigues. DOI: 10.1145/3441852.3476556

Smartphones are shipped with built-in screen readers and other accessibility features that enable blind people to autonomously learn and interact with the device. However, the process is not seamless, and many face difficulties in adopting the device and on the path to becoming more experienced users. In the past, games like Minesweeper served to introduce and train people in the use of the mouse, from left- and right-clicking to the precision pointing required to play the game. Smartphone gestures (and particularly screen reader gestures) pose challenges similar to those first faced by mouse users. In this work, we explore the use of games to inconspicuously train gestures. We designed and developed a set of accessible games that enable users to practice smartphone gestures. We evaluated the games with 8 blind users and conducted remote interviews. Our results show how purposeful accessible games can play an important role in training and discovering smartphone gestures, as they offer a playful method of learning. This, in turn, increases autonomy and inclusion, as the process becomes easier and more engaging.
A Case for Making Web Accessibility Guidelines Accessible: Older Adult Content Creators and Web Accessibility Planning
Aqueasha Martin-Hammond, Ulka Patil, Barsa Tandukar. DOI: 10.1145/3441852.3476472

This paper presents our experiences supporting web accessibility planning among a group of older adult online content creators. We highlight the challenges we encountered in meeting our partners' web accessibility information needs and in helping this group of creators become aware of accessibility issues and put measures in place to address them. Our reflections highlight opportunities for future efforts to improve web accessibility support for everyday content creators, particularly those less familiar with web accessibility options.
VIDDE: Visualizations for Helping People with COPD Interpret Dyspnea During Exercise
Claudia Chen, R. Wu, Hashim Khan, K. Truong, Fanny Chevalier. DOI: 10.1145/3441852.3471204

People with chronic obstructive pulmonary disease (COPD) experience dyspnea and dyspnea-related distress and anxiety (DDA) upon physical exertion. During supervised exercise, measurements of physiological data such as heart rate (HR) and blood oxygen saturation (O2sat) are commonly used for safety, but the impact of such monitoring on patients' perceptions and behaviour has not previously been studied. This paper investigates the effect of presenting live physiological data to people with COPD during exercise, focusing on its impact on perceptions of dyspnea intensity (DI) and DDA. Informed by formative interviews with 15 people with COPD, we design VIDDE, an exercise companion tool that visualizes live data from a pulse oximeter, and evaluate its effect on DI and DDA through case studies involving 3 participants with COPD exercising in their homes. We also conducted design probe interviews with 6 additional participants with COPD to investigate their needs and design requirements for an exercise companion application featuring physiological data monitoring. Our results suggest that presenting live physiological data during exercise is valuable and can contribute to reduced DDA and a better understanding of breathlessness sensations, while providing sufficient reassurance to encourage physical activity.
Providing and Accessing Support During the COVID-19 Pandemic: Experiences of Mental Health Professionals, Community and Vocational Support Providers, and Adults with ASD
Tiffany Thang, Alice Liang, Yechan Choi, Adrian Parrales, Sara H. Kuang, S. Kurniawan, Heather A. Perez. DOI: 10.1145/3441852.3476470

Due to the COVID-19 pandemic, essential services and support for individuals with ASD have had to transition to telehealth and virtual technologies. While these technologies have been recommended as a way to continue providing essential services to this population, their impact on essential service providers and on adults with ASD is not yet understood. This experience report draws on insights from essential service providers and adults with ASD at a community center for adults with developmental disabilities to understand their experiences providing and accessing mental health, community, and vocational support through telehealth and virtual technologies during the COVID-19 pandemic.
PatRec: A Mobile Game for Learning Social Haptic Communication
Jessica G. J. Vuijk, James Gay, M. Plaisier, A. Kappers, A. Theil. DOI: 10.1145/3441852.3476563

Social Haptic Communication (SHC) is one of the many tactile modes of communication used by persons with deafblindness to access information about their surroundings. SHC usually involves an interpreter executing finger and hand signs on the back of a person with multi-sensory disabilities. Learning SHC, however, can be challenging and time-consuming, particularly for those who experience deafblindness later in life. In this work, we present PatRec: a mobile game for learning SHC concepts. PatRec is a multiple-choice quiz game connected to a chair interface containing a 3x3 array of vibration motors that emulate different SHC signs. Players collect scores and badges whenever they identify the correct SHC vibration pattern, encouraging continued engagement and a better position on the leaderboard. The game is also intended for family members learning SHC. We report the technical implementation of PatRec and the findings from a user evaluation.
Kavita Project: Voice Programming for People with Motor Disabilities
Dayanlee De León Cordero, Christopher Ayala, Patricia Ordóñez. DOI: 10.1145/3441852.3476516

Most computer programs are designed to require interaction based on finger movements and hand gestures. This type of environment, which assumes the dexterity of human hands, presents limitations for those with motor disabilities. This limitation excludes this population from learning to code and, for those who develop a musculoskeletal disorder later on, could jeopardize their programming careers. The objective of this research is to design a Voice User Interface (VUI) that allows users to program by voice in an Integrated Development Environment (IDE). To this end, a basic programming structure was defined using an Alexa Skill, which allows the user to declare variables, print values, solve basic mathematical expressions, insert conditional expressions, and create loops. An online text editor was created using CodeMirror to capture user input in the Python programming language. However, the results could not yet be evaluated, because the application does not have an integrated compiler. Future work will add the compiler so that the user's program can be executed in the online editor, along with the ability to edit, debug, and move the cursor using the Alexa Skill.