Beyond Fun: Players’ Experiences of Accessible Rehabilitation Gaming for Spinal Cord Injury
Gabriele Cimolino, Sussan Askari, Nicholas Graham
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility, 2021-10-17. DOI: https://doi.org/10.1145/3441852.3471227

Rehabilitation gaming—play of digital games that incorporate rehabilitation exercises—is a well-known and broadly applicable way to make physical rehabilitation more fun. It can motivate patients with spinal cord injury to engage in exercises that they find boring and can be as effective as traditional physiotherapy. However, patients’ needs are not only physical. Rehabilitation also needs to help patients overcome the psychological trauma of spinal cord injury. For patients coping with disability, hopelessness, depression, anxiety, or a loss of identity, rehabilitation gaming may provide benefits beyond making exercise more fun. We asked six participants with spinal cord injury to play three cycling-based rehabilitation games to determine how play might change their experiences of rehabilitation. They said that rehabilitation games may be able to help patients to actively participate in their rehabilitation, help them to rediscover who they are, and show them a better future living with spinal cord injury.
myView: End-user Authoring of Virtual Environments for Therapy
Sérgio Alves, P. Caldeira, Filipa Ferreira-Brito, L. Carriço, Tiago Guerreiro
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility, 2021-10-17. DOI: https://doi.org/10.1145/3441852.3476543

Virtual environments for therapy are scarce and lack personalization. Creating these environments is done by specialists and is time-consuming and expensive. We present a smartphone tool that allows non-specialists to create navigable virtual environments by taking and linking sequences of panoramic photo spheres, analogously to Google Street View. The environments can then be edited in a web platform, myView, where text, images, videos, sounds, and pick-up objects can be added. myView allows users to navigate their environments as well as share those environments with others. In a preliminary study with two psychologists, where myView was used as an elicitation probe, the approach was found to be useful for creating meaningful activities for reminiscence and cognitive training. The platform shows promise for democratizing the crafting of virtual environments.
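The abstract describes creating navigable environments by linking sequences of panoramic photo spheres, in the manner of Google Street View. A minimal sketch of how such a linked-panorama structure could be modeled (the class, method, and annotation names here are hypothetical illustrations, not taken from the paper):

```python
class PhotoSphere:
    """One 360-degree panorama node in a navigable virtual environment."""

    def __init__(self, name):
        self.name = name
        self.links = {}        # direction label -> neighboring PhotoSphere
        self.annotations = []  # text, images, videos, sounds, pick-up objects

    def link(self, direction, other, back_direction=None):
        """Connect two panoramas; optionally make the link two-way."""
        self.links[direction] = other
        if back_direction is not None:
            other.links[back_direction] = self

    def navigate(self, direction):
        """Follow a link, or return None if there is no panorama that way."""
        return self.links.get(direction)


# Build a tiny two-room environment and annotate it.
hall = PhotoSphere("hallway")
kitchen = PhotoSphere("kitchen")
hall.link("forward", kitchen, back_direction="back")
hall.annotations.append(("text", "Family photos on the left wall"))
```

Navigating is then a walk over this graph: `hall.navigate("forward")` reaches the kitchen, and `kitchen.navigate("back")` returns to the hallway.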
Nearmi: A Framework for Designing Point of Interest Techniques for VR Users with Limited Mobility
Rachel L. Franz, Sasa Junuzovic, Martez E. Mott
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility, 2021-10-17. DOI: https://doi.org/10.1145/3441852.3471230

We propose Nearmi, a framework that enables designers to create customizable and accessible point-of-interest (POI) techniques in virtual reality (VR) for people with limited mobility. Designers can use Nearmi by creating and combining instances of its four components—representation, display, selection, and transition. These components enable users to gain awareness of POIs in virtual environments, and automatically re-orient the virtual camera toward a selected POI. We conducted a video elicitation study where 17 participants with limited mobility provided feedback on different Nearmi implementations. Although participants generally weighed the same design considerations when discussing their preferences, their choices reflected tradeoffs in accessibility, realism, spatial awareness, comfort, and familiarity with the interaction. Our findings highlight the need for accessible and customizable VR interaction techniques, as well as design considerations for building and evaluating these techniques.
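The four components named in the abstract compose into a pipeline: order the POIs, show them, let the user pick one, and re-orient the camera. A rough sketch of that composition under assumed, hypothetical implementations (the paper's actual component interfaces are not shown here):

```python
from dataclasses import dataclass
from math import atan2, degrees


@dataclass
class POI:
    name: str
    x: float
    y: float


# Hypothetical stand-ins for Nearmi's four components.
def representation(pois, user_x, user_y):
    """Representation: order POIs, here by squared distance from the user."""
    return sorted(pois, key=lambda p: (p.x - user_x) ** 2 + (p.y - user_y) ** 2)


def display(pois):
    """Display: render the ordered POIs as selectable labels."""
    return [f"{i}: {p.name}" for i, p in enumerate(pois)]


def selection(pois, index):
    """Selection: pick one POI from the displayed list."""
    return pois[index]


def transition(user_x, user_y, target):
    """Transition: yaw angle (degrees) to re-orient the camera at the POI."""
    return degrees(atan2(target.y - user_y, target.x - user_x))


pois = [POI("door", 0.0, 5.0), POI("window", 3.0, 0.0)]
ordered = representation(pois, 0.0, 0.0)  # nearest first
chosen = selection(ordered, 0)
yaw = transition(0.0, 0.0, chosen)
```

Swapping any one function for a different implementation (e.g. ordering POIs alphabetically, or animating the camera turn) changes that component without touching the other three, which is the kind of mix-and-match the framework is described as enabling.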
Visionary Caption: Improving the Accessibility of Presentation Slides Through Highlighting Visualization
Carmen Yip, Jie Mi Chong, Sin Yee Kwek, Yong Wang, Kotaro Hara
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility, 2021-10-17. DOI: https://doi.org/10.1145/3441852.3476539

Presentation slides are widely used in settings such as academic talks and business meetings. Captions placed on slides help deaf and hard of hearing (DHH) people understand spoken content, but simultaneously comprehending and associating the visual content on slides with the caption text can be challenging. In this paper, we design and develop a visualization technique that highlights and associates a chart on a slide with the numerical data in its caption. We first conduct a small formative study with people with and without hearing impairments to assess the value of the visualization technique using a low-fidelity video prototype. We then develop Visionary Caption, a visualization technique that uses natural language processing to automatically highlight visual content and numerical phrases, and show the association between them. We present a scenario and personas to showcase the potential utility of Visionary Caption and guide its future development.
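Associating caption text with chart values requires first finding the numerical phrases in the caption. Visionary Caption uses natural language processing for this; as a much simpler illustration of the idea, a regex-based extractor (the pattern and function name are my own, not from the paper) might look like:

```python
import re

# Matches numbers with optional currency sign, decimals, percent sign,
# and an optional magnitude word, e.g. "12%", "$4.2 million", "2020".
NUM_PHRASE = re.compile(
    r"\$?\d+(?:\.\d+)?%?(?:\s*(?:million|billion|percent))?"
)


def numerical_phrases(caption):
    """Return the numerical phrases found in a caption string."""
    return NUM_PHRASE.findall(caption)


caption = "Revenue grew 12% to $4.2 million in 2020."
phrases = numerical_phrases(caption)
```

Each extracted phrase could then be matched against the data behind the chart and highlighted in both places to show the association.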
Using Games to Practice Screen Reader Gestures
Gonçalo Ferreira Lobo, David Gonçalves, Pedro Pais, Tiago Guerreiro, André Rodrigues
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility, 2021-10-17. DOI: https://doi.org/10.1145/3441852.3476556

Smartphones ship with built-in screen readers and other accessibility features that enable blind people to autonomously learn and interact with the device. However, the process is not seamless, and many face difficulties in adopting the device and on the path to becoming more experienced users. In the past, games like Minesweeper served to introduce and train people in the use of the mouse, from its left and right click to the precision pointing required to play the game. Smartphone gestures (and particularly screen reader gestures) pose challenges similar to the ones first faced by mouse users. In this work, we explore the use of games to inconspicuously train gestures. We designed and developed a set of accessible games that enable users to practice smartphone gestures. We evaluated the games with 8 blind users and conducted remote interviews. Our results show how purposeful accessible games could be important in the process of training and discovering smartphone gestures, as they offer a playful method of learning. This, in turn, increases autonomy and inclusion, as the process becomes easier and more engaging.
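The core loop of such a practice game is to prompt a gesture, check what the player performed, and reward a match. A minimal sketch of that loop, with entirely hypothetical gesture names and scoring (nothing here is from the paper's games):

```python
class GesturePracticeGame:
    """Score a player for performing target gestures in sequence."""

    def __init__(self, targets):
        self.targets = list(targets)  # gestures still to be performed
        self.score = 0

    def perform(self, gesture):
        """Return True and score a point if the gesture matches the next target."""
        if self.targets and gesture == self.targets[0]:
            self.targets.pop(0)
            self.score += 1
            return True
        return False


game = GesturePracticeGame(["swipe_right", "double_tap"])
game.perform("swipe_right")  # correct: advances and scores
game.perform("swipe_left")   # wrong gesture: no score
game.perform("double_tap")   # correct: game complete
```

A real implementation would of course recognize the touch input itself and give audio feedback through the screen reader; the sketch only shows the game-state side.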
A Case for Making Web Accessibility Guidelines Accessible: Older Adult Content Creators and Web Accessibility Planning
Aqueasha Martin-Hammond, Ulka Patil, Barsa Tandukar
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility, 2021-10-17. DOI: https://doi.org/10.1145/3441852.3476472

This paper presents our experiences supporting web accessibility planning among a group of older adult online content creators. We highlight the challenges we encountered in meeting the web accessibility informational needs of our partners and in helping this group of creators become aware of accessibility issues and put measures in place to address them. Our reflections highlight opportunities for future efforts to improve web accessibility support for everyday content creators, particularly those less familiar with web accessibility options.
VIDDE: Visualizations for Helping People with COPD Interpret Dyspnea During Exercise
Claudia Chen, R. Wu, Hashim Khan, K. Truong, Fanny Chevalier
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility, 2021-10-17. DOI: https://doi.org/10.1145/3441852.3471204

People with chronic obstructive pulmonary disease (COPD) experience dyspnea and dyspnea-related distress and anxiety (DDA) upon physical exertion. During supervised exercise, measurements of physiological data, such as heart rate (HR) and blood oxygen saturation (O2sat), are commonly used for safety, but the impact of such monitoring on patients’ perceptions and behaviour has not previously been studied. This paper investigates the effect of presenting live physiological data to people with COPD during exercise, focusing on its impact on perceptions of dyspnea intensity (DI) and DDA. Informed by formative interviews with 15 people with COPD, we design VIDDE, an exercise companion tool that visualizes live data from a pulse oximeter, and evaluate its effect on DI and DDA through case studies involving 3 participants with COPD exercising at their homes. We also conducted design probe interviews with 6 more participants with COPD to investigate their needs and design requirements for an exercise-companion application featuring physiological data monitoring. Our results suggest that presenting live physiological data during exercise is valuable: it can contribute to reduced DDA and a better understanding of breathlessness sensations, while providing sufficient reassurance to encourage physical activity.
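An exercise companion fed by a pulse oximeter ultimately turns each (O2sat, HR) reading into feedback the user can act on. The sketch below shows one simple way to bucket readings into feedback zones; the thresholds and zone names are placeholder assumptions for illustration only, not the paper's design and not clinical guidance:

```python
def feedback_zone(o2sat, heart_rate, hr_max):
    """Map a live pulse-oximeter reading to a coarse feedback message.

    o2sat: blood oxygen saturation in percent.
    heart_rate: current heart rate in beats per minute.
    hr_max: the user's configured maximum heart rate.
    Thresholds are illustrative placeholders.
    """
    if o2sat < 88:
        return "pause and rest"
    if heart_rate > 0.85 * hr_max:
        return "slow down"
    return "keep going"


# Example readings during a home exercise session.
zone = feedback_zone(o2sat=96, heart_rate=100, hr_max=160)
```

In a real tool, such a mapping would sit behind the visualization layer, which the paper suggests is what helps users interpret breathlessness sensations rather than the raw numbers alone.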
Providing and Accessing Support During the COVID-19 Pandemic: Experiences of Mental Health Professionals, Community and Vocational Support Providers, and Adults with ASD
Tiffany Thang, Alice Liang, Yechan Choi, Adrian Parrales, Sara H. Kuang, S. Kurniawan, Heather A. Perez
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility, 2021-10-17. DOI: https://doi.org/10.1145/3441852.3476470

Due to the COVID-19 pandemic, essential services and support for individuals with ASD have had to transition to telehealth and virtual technologies. While these technologies have been recommended as a way to continue providing essential services to this population, their impact on essential service providers and adults with ASD is not yet understood. This experience report provides insight from essential service providers and adults with ASD at a community center for adults with developmental disabilities, examining their experiences providing and accessing mental health, community, and vocational support through telehealth and virtual technologies during the COVID-19 pandemic.
PatRec: A Mobile Game for Learning Social Haptic Communication
Jessica G. J. Vuijk, James Gay, M. Plaisier, A. Kappers, A. Theil
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility, 2021-10-17. DOI: https://doi.org/10.1145/3441852.3476563

Social Haptic Communication (SHC) is one of the many tactile modes of communication used by persons with deafblindness to access information about their surroundings. SHC usually involves an interpreter executing finger and hand signs on the back of a person with multi-sensory disabilities. Learning SHC, however, can become challenging and time-consuming, particularly for those who experience deafblindness later in life. In this work, we present PatRec: a mobile game for learning SHC concepts. PatRec is a multiple-choice quiz game connected to a chair interface that contains a 3×3 array of vibration motors emulating different SHC signs. Players collect scores and badges whenever they guess the right SHC vibration pattern, leading to continuous engagement and a better position on a leaderboard. The game is also meant for family members to learn SHC. We report the technical implementation of PatRec and the findings from a user evaluation.
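A sign played on a 3×3 motor array is naturally represented as a sequence of frames, each naming which motors are active. The sketch below shows one such representation with a text rendering for debugging; the sign name, pattern, and frame format are hypothetical, not PatRec's actual data:

```python
# A sign is a list of frames; each frame lists (row, col) motors that are on.
SIGNS = {
    # Hypothetical example: a downward stroke along the middle column.
    "down-stroke": [
        [(0, 1)],  # top-middle motor
        [(1, 1)],  # center motor
        [(2, 1)],  # bottom-middle motor
    ],
}


def render(frames):
    """Render each frame as a 3x3 text grid ('#' = motor on, '.' = off)."""
    grids = []
    for active in frames:
        grid = [["." for _ in range(3)] for _ in range(3)]
        for row, col in active:
            grid[row][col] = "#"
        grids.append("\n".join("".join(row) for row in grid))
    return grids


frames = render(SIGNS["down-stroke"])
```

On the real chair interface, each frame would instead drive the corresponding vibration motors for a short duration before advancing to the next frame.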
Kavita Project: Voice Programming for People with Motor Disabilities
Dayanlee De León Cordero, Christopher Ayala, Patricia Ordóñez
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility, 2021-10-17. DOI: https://doi.org/10.1145/3441852.3476516

Most computer programs are designed to require interaction based on finger movements and hand gestures. This type of environment, which assumes the dexterity of human hands, presents limitations for those with motor disabilities. It excludes this population from learning to code and, for those who develop a musculoskeletal disorder in later stages of their careers, could jeopardize their work as programmers. The objective of this research is to design a Voice User Interface (VUI) that allows users to program in an Integrated Development Environment (IDE) using their voice. To this end, a basic programming structure was defined using Alexa Skills, which allows the user to declare variables, print values, solve basic mathematical expressions, insert conditional expressions, and create loops. An online text editor was created using CodeMirror to run user input in the Python programming language. However, the results could not yet be evaluated, since the application does not have an integrated compiler. Future work will add the compiler, so that the user’s program can be executed in the online editor, along with the ability to edit, debug, and move the cursor using the Alexa Skill.
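At the heart of such a VUI is a mapping from spoken command phrases to lines of Python source. A toy sketch of that translation step (the command phrasings and function are invented for illustration; the project's actual Alexa Skill grammar is not described in the abstract):

```python
import re

def command_to_code(utterance):
    """Translate a simple spoken command into a line of Python source."""
    m = re.match(r"declare variable (\w+) equal to (\S+)", utterance)
    if m:
        return f"{m.group(1)} = {m.group(2)}"
    m = re.match(r"print (\w+)", utterance)
    if m:
        return f"print({m.group(1)})"
    m = re.match(r"loop (\d+) times", utterance)
    if m:
        return f"for _ in range({m.group(1)}):"
    raise ValueError(f"unrecognized command: {utterance}")


line = command_to_code("declare variable total equal to 0")
```

Generated lines would be appended to the CodeMirror editor buffer; executing them is the missing piece the abstract's future work refers to.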