Understanding Screen Readers’ Plugins
Farhani Momotaz, Md. Touhidul Islam, Md Ehtesham-Ul-Haque, Syed Masum Billah
https://doi.org/10.1145/3441852.3471205
Screen reader plugins are small pieces of code that blind users can download and install to enhance the capabilities of their screen readers. In this paper, we aim to understand the user experience of screen reader plugins, as well as their developers, distribution model, and maintenance. To this end, we conducted a study with 14 blind screen reader users. Our study revealed that screen reader users rely on plugins for various reasons, e.g., to improve the usability of both screen readers and application software, to make partially accessible applications accessible, and to enable custom shortcuts and commands. Furthermore, installing plugins is easy; uninstalling them is rare; and finding them online is ad hoc, challenging, and poses security threats. In addition, developing screen reader plugins is technically demanding; only a handful of people develop plugins, and they are well recognized in the community. Finally, there is no central repository of plugins for most screen readers, and most plugins never receive updates from their developers and become obsolete. The lack of financial incentives contributes to the slow growth of the plugin ecosystem. Based on our findings, we recommend creating a central repository for all plugins, engaging third-party developers, and raising general awareness of the benefits and dangers of plugins. We believe our findings will inspire researchers to embrace the plugin-based distribution model as an effective way to combat application-level accessibility issues.
{"title":"Understanding Screen Readers’ Plugins","authors":"Farhani Momotaz, Md. Touhidul Islam, Md Ehtesham-Ul-Haque, Syed Masum Billah","doi":"10.1145/3441852.3471205","DOIUrl":"https://doi.org/10.1145/3441852.3471205","url":null,"abstract":"Screen reader plugins are small pieces of code that blind users can download and install to enhance the capabilities of their screen readers. In this paper, we aim to understand the user experience of screen readers’ plugins, as well as their developers, distribution model, and maintenance. To this end, we conducted a study with 14 blind screen reader users. Our study revealed that screen reader users rely on plugins for various reasons, e.g., to improve the usability of both screen readers and application software, to make partially accessible applications accessible, and to enable custom shortcuts and commands. Furthermore, installing plugins is easy; uninstalling them is unlikely; and finding them online is ad hoc, challenging, and poses security threats. In addition, developing screen reader plugins is technically demanding; only a handful of people develop plugins, and they are well-recognized in the community. Finally, there is no central repository for plugins for most screen readers, and most plugins do not receive updates from their developers and become obsolete. The lack of financial incentives plays in the slow growth of the plugin ecosystem. Based on our findings, we recommend creating a central repository for all plugins, engaging third-party developers, and raising general awareness about the benefits and dangers of plugins. We believe our findings will inspire researchers to embrace the plugin-based distribution model as an effective way to combat application-level accessibility issues.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126792204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing Apps to Support Engagement by Older Adults: A think-aloud study of the eNutri dietary-intake assessment web app
E. Kelly, M. Weech, R. Fallaize, R. Z. Franco, F. Hwang, J. Lovegrove
https://doi.org/10.1145/3441852.3476537
Evidence suggests that, compared with younger users, older adults benefit from additional support when engaging with new apps. This paper presents a remote observation study of 15 UK older adults (aged >65 years) using eNutri, a web-based dietary assessment app. The results highlight the importance placed by older adults on having good instructions and support when learning and using apps, and suggest that including features such as instructional videos, “contact us” information, and explicit guidance on “commonly-known” features may be important for this population. The study also found heterogeneity within the group in terms of app delivery preferences (smartphone vs. web-based), which serves as a reminder that, when designing apps for older adults, it may be helpful to bear in mind the variation in people's use of, and comfort with, different types of devices.
{"title":"Designing Apps to Support Engagement by Older Adults: A think-aloud study of the eNutri dietary-intake assessment web app","authors":"E. Kelly, M. Weech, R. Fallaize, R. Z. Franco, F. Hwang, J. Lovegrove","doi":"10.1145/3441852.3476537","DOIUrl":"https://doi.org/10.1145/3441852.3476537","url":null,"abstract":"Evidence suggests that, compared with younger users, older adults benefit from additional support when engaging with new apps. This paper presents a remote observation study of 15 UK older adults (aged >65 years) using eNutri, a web-based dietary assessment app. The results highlight the importance placed by older adults on having good instructions and support when learning and using apps, and suggest that including features such as instructional videos, “contact us” information, and explicit guidance on “commonly-known” features may be important for this population. The study also found heterogeneity within the group in terms of app delivery preferences (smartphone vs web-based), which serves as a reminder that when designing apps for older adults, it may be helpful to bear in mind the variation in people's use of and comfort with types of devices.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116548327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GazeMetro: A Gaze-Based Interactive System for Metro Map
Yaqi Xie, Hao Wang, Chaoquan Luo, Zhuo Yang, Yinwei Zhan
https://doi.org/10.1145/3441852.3476569
In this paper, we propose GazeMetro, a gaze-based interactive system for metro maps that lets users explore and interact with a map through eye movements alone, providing a new interaction experience. The objective of GazeMetro is to let viewers search a metro map using gaze-based interactions, without any other manual operations. We implemented GazeMetro with four gaze-based interaction techniques: Gaze Fisheye, Gaze Scaling and Panning, Gaze Selection, and Gaze Hint. We conducted an experiment to evaluate GazeMetro; the results showed positive ratings for pragmatic quality and especially for hedonic quality.
{"title":"GazeMetro: A Gaze-Based Interactive System for Metro Map","authors":"Yaqi Xie, Hao Wang, Chaoquan Luo, Zhuo Yang, Yinwei Zhan","doi":"10.1145/3441852.3476569","DOIUrl":"https://doi.org/10.1145/3441852.3476569","url":null,"abstract":"In this paper, we propose a gaze-based interactive system for metro map named GazeMetro, which helps explore and interact with the metro map only by eye movements, providing a new experience of interaction. The objective of GazeMetro is to provide metro map viewers with gaze-based interactions to search the metro map without other manual operations. We implement GazeMetro with 4 gaze-based interaction techniques which are Gaze Fisheye, Gaze Scaling and Panning, Gaze Selection and Gaze Hint. We conducted an experiment to evaluate GazeMetro and the results showed a positive evaluation in pragmatic quality and especially in hedonic quality.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"28 52","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134413046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robot Trajectories When Approaching a User with a Visual Impairment
Jirachaya "Fern" Limprayoon, Prithu Pareek, Xiang Zhi Tan, Aaron Steinfeld
https://doi.org/10.1145/3441852.3476538
Mobile robots have been shown to be helpful in guiding users through complex indoor spaces. While these robots can assist all types of users, current implementations often rely on users visually rendezvousing with the robot, which may be a challenge for people with visual impairments. This paper describes a proof of concept for a robotic system that handles this kind of short-range rendezvous for users with visual impairments. We propose using a lattice graph-based Anytime Repairing A* (ARA*) planner as the global planner to discourage the robot from turning in place at its goal position, making its path more human-like and safer. We also interviewed an Orientation & Mobility (O&M) specialist for their thoughts on our planner. They observed that our planner produces trajectories that are less obtrusive to the user than those of the ROS default global planner, and recommended that our system allow the robot to approach the person from the side rather than from the front, as it currently does. In the future, we plan to test our system with users in person to better validate our assumptions and find additional pain points.
{"title":"Robot Trajectories When Approaching a User with a Visual Impairment","authors":"Jirachaya \"Fern\" Limprayoon, Prithu Pareek, Xiang Zhi Tan, Aaron Steinfeld","doi":"10.1145/3441852.3476538","DOIUrl":"https://doi.org/10.1145/3441852.3476538","url":null,"abstract":"Mobile robots have been shown to be helpful in guiding users in complex indoor spaces. While these robots can assist all types of users, current implementations often rely on users visually rendezvousing with the robot, which may be a challenge for people with visual impairments. This paper describes a proof of concept for a robotic system that addresses this kind of short-range rendezvous for users with visual impairments. We propose to use a lattice graph-based Anytime Repairing A* (ARA*) planner as a global planner to discourage the robot from turning in place at its goal position, making its path more human-like and safer. We also interviewed an Orientation & Mobility (O&M) Specialist for their thoughts on our planner. They observed that our planner produces less obtrusive trajectories to the user than the ROS default global planner and recommended that our system should allow the robot to approach the person from the side as opposed to the front as it currently does. In the future, we plan to test our system with users in-person to better validate our assumptions and find additional pain points.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133687384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing Sensory and Social Tools for Neurodivergent Individuals in Social Media Environments
Lauren Race, Amber James, A. Hayward, K. El-Amin, Maya Gold Patterson, Theresa Mershon
https://doi.org/10.1145/3441852.3476546
Sensory guides and social narratives are learning tools that provide sensory and social support to neurodivergent individuals. These tools—and their design guidelines—have historically been developed for physical environments, such as museums and classrooms. They lack support for social media environments, where sensory stimuli and social contexts can be complex and uncertain. We address these challenges by designing a novel social media sensory guide and social narrative, specifically adapted for social media interaction. We ground our designs in a use case: Twitter Spaces, an audio-only conversation feature in beta. The goal of this pilot study is to determine whether neurodivergent users want sensory guides and social narratives adapted for social media, and whether users find them helpful in setting expectations for social media interaction. We evaluated these tools with eight neurodivergent Twitter users, using tasks and think-aloud protocols. Results indicate a strong potential for adoption of both tools among neurodivergent individuals to reduce overstimulation in social media environments.
{"title":"Designing Sensory and Social Tools for Neurodivergent Individuals in Social Media Environments","authors":"Lauren Race, Amber James, A. Hayward, K. El-Amin, Maya Gold Patterson, Theresa Mershon","doi":"10.1145/3441852.3476546","DOIUrl":"https://doi.org/10.1145/3441852.3476546","url":null,"abstract":"Sensory guides and social narratives are learning tools that provide sensory and social support to neurodivergent individuals. These tools—and their design guidelines—have historically been developed for physical environments, such as museums and classrooms. They lack support for social media environments, where sensory stimuli and social contexts can be complex and uncertain. We address these challenges by designing a novel social media sensory guide and social narrative, specifically adapted for social media interaction. We leverage our use case, Twitter Spaces—an audio-only conversation feature in beta. The goal of this pilot study is to determine whether neurodivergent users want sensory guides and social narratives adapted for social media, and if users find them helpful in setting expectations for social media interaction. We evaluate these tools with eight neurodivergent Twitter users, using tasks and thinking aloud. Results indicate a strong potential for adoption of both tools among neurodivergent individuals to reduce overstimulation in social media environments.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132707130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Auditory feedback to compensate audible instructions to support people with visual impairment
Gabriele Galimberti
https://doi.org/10.1145/3441852.3476477
This work focuses on the use of adaptive sound feedback on mobile devices in mobility contexts characterized by external noise. Noise masks the device's feedback, degrading the information it contains and preventing it from being fully perceived and understood. This leads to errors in the interaction with the device or requires the feedback to be repeated. Therefore, compensation techniques are necessary to make the provided feedback audible and thus make interaction with the device easier. As an initial research task, we experimented with compensation techniques on verbal information. Preliminary results indicate that increasing the volume or applying adaptive equalization can improve the percentage of information understood without altering the intrusiveness of the compensated verbal instructions. We are currently exploring similar compensation techniques for audio feedback based on sonification, to make information conveyed by modulating sound properties more understandable in noisy contexts and thus enable reliable interaction with the user. For example, it would be desirable to apply effective compensation to instructions provided through sonification by turn-by-turn navigation assistants for people with visual impairments, making navigation feasible in mobility contexts characterized by background noise.
{"title":"Auditory feedback to compensate audible instructions to support people with visual impairment","authors":"Gabriele Galimberti","doi":"10.1145/3441852.3476477","DOIUrl":"https://doi.org/10.1145/3441852.3476477","url":null,"abstract":"This work focuses on the use of adaptive sound feedback on mobile devices in mobility contexts characterized by external noise. The noise masks the device feedback, degrading the information it contains and preventing it from being fully perceived and understood. This leads to errors in the interaction with the device or requires the feedback to be repeated. Therefore compensation techniques are necessary in order to make the provided feedback audible and thus make the interaction with the device easier. As an initial research task, we experimented with compensation techniques on verbal information. Preliminary results indicate that increase in volume or adaptive equalization can improve the percentage of information understood without altering the intrusiveness of the compensated verbal instructions. Currently we are exploring similar compensation techniques for audio feedback based on sonification in order to make the information provided by modulating sound properties more understandable in a noisy context and thus enable reliable interaction with the user. For example, it would be desirable to apply effective compensations to instructions provided through sonification by turn-by-turn navigation assistants for people with visual impairments in order to make the navigation feasible in mobility contexts characterized by background noise.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"141 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114523975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the Requirements of Abuse Reporting for Persons with Intellectual and Developmental Disabilities
K. Venkatasubramanian, Jeanine L. M. Skorinko, N. Jutras, Natalia Carvajal Erker, Lara Padir
https://doi.org/10.1145/3441852.3476520
Incidents of abuse committed against persons with intellectual and developmental disabilities (I/DD) are woefully under-reported. One way of helping change this situation is to empower persons with I/DD with tools to self-report abuse. During abuse reporting, the reporter is asked to provide a variety of information about the abuse and its context. In this paper, we wanted to understand which pieces of information are typically needed to successfully report abuse and whether persons with I/DD can provide them. Consequently, we conducted an exploratory survey of the staff at an adult protective services agency in our region and asked them about their experiences with receiving abuse self-reports from persons with I/DD. Overall, we found that persons with I/DD are typically able to provide enough information to successfully self-report abuse.
{"title":"Exploring the Requirements of Abuse Reporting for Persons with Intellectual and Developmental Disabilities","authors":"K. Venkatasubramanian, Jeanine L. M. Skorinko, N. Jutras, Natalia Carvajal Erker, Lara Padir","doi":"10.1145/3441852.3476520","DOIUrl":"https://doi.org/10.1145/3441852.3476520","url":null,"abstract":"Incidents of abuse committed against persons with intellectual and developmental disabilities (I/DD) are woefully under-reported. One way of helping change this situation is to empower persons with I/DD with tools to self-report abuse. During abuse reporting the reporter is requested to provide a variety of information about the abuse and its context. In this paper wanted to understand which pieces of information are typically needed to successfully report abuse and whether persons with I/DD can provide them. Consequently, we conducted an exploratory survey of the staff at an adult protective services agency in our region and asked them about their experiences with receiving abuse self-reports by persons with I/DD. Overall, we found that persons with I/DD are typically able to provide enough information to successfully self-report abuse.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128706314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
WLA4ND: a Wearable Dataset of Learning Activities for Young Adults with Neurodiversity to Provide Support in Education
Hui Zheng, Pattiya Mahapasuthanon, Yujing Chen, Huzefa Rangwala, A. Evmenova, V. Motti
https://doi.org/10.1145/3441852.3471220
Data-driven assistive wearable technologies show promise for supporting young adults with neurodiversity in inclusive education. Existing datasets focus on wearable sensor data of various activities from neurotypical people; however, no dataset exists that includes learning-related activity data from individuals with neurodiversity. The contributions of this paper are: (1) WLA4ND, a dataset of learning activities performed by eight young adults with neurodiversity, collected from smartwatch sensors. The activities are common learning tasks: reading, writing, typing, answering follow-up questions, and off-task behavior. (2) An evaluation of classification on WLA4ND with five activity recognition models, spanning conventional and deep learning methods. A Convolutional Recurrent Neural Network (CRNN) achieved a balanced accuracy of 92.2% in user-dependent evaluations, while a Federated Multi-Task Hierarchical Attention Model (FATHOM) achieved 91.8% in user-independent evaluations. This evaluation demonstrates that existing activity recognition technologies can be applied to neurodiverse populations. WLA4ND can also be used by researchers as a complement for activity recognition, automatic labeling, and next-generation assistive wearable applications.
{"title":"WLA4ND: a Wearable Dataset of Learning Activities for Young Adults with Neurodiversity to Provide Support in Education","authors":"Hui Zheng, Pattiya Mahapasuthanon, Yujing Chen, Huzefa Rangwala, A. Evmenova, V. Motti","doi":"10.1145/3441852.3471220","DOIUrl":"https://doi.org/10.1145/3441852.3471220","url":null,"abstract":"Data-driven assistive wearable technologies are promising to support young adults with neurodiversity in inclusive education. Existing datasets focus on wearable sensor data of various activities from neurotypical people. However, no dataset exists including learning-related activity data from individuals with neurodiversity. The contributions of this paper include (1) WLA4ND, a dataset of learning activities performed by eight young adults with neurodiversity collected from smartwatch sensors. The activities are common learning tasks, including reading, writing, typing, answering follow-up questions, and off-task. (2) Evaluation of classification on WLA4ND with five activity recognition models, including conventional and deep learning methods. The Convolutional Recurrent Neural Network (CRNN) model achieved a balanced accuracy of 92.2% for user-dependent evaluations, while Federated Multi-Task Hierarchical Attention Model (FATHOM) achieved 91.8% for user-independent evaluations. This evaluation demonstrates that existing activity recognition technologies can be applied to neurodiverse populations. Also, WLA4ND can be used by researchers as a complement for activity recognition, automatic labeling, and next-generation assistive wearable applications.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121919692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experimental Crowd+AI Approaches to Track Accessibility Features in Sidewalk Intersections Over Time
Ather Sharif, Paari Gopal, Michael Saugstad, Shiven Bhatt, Raymond Fok, Galen Cassebeer Weld, Kavi Dey, Jon E. Froehlich
https://doi.org/10.1145/3441852.3476549
How do sidewalks change over time? Are there geographic or socioeconomic patterns to this change? These questions are important but difficult to address with current GIS tools and techniques. In this demo paper, we introduce three preliminary crowd+AI (Artificial Intelligence) prototypes to track changes in street intersection accessibility over time—specifically, curb ramps—and report on results from a pilot usability study.
{"title":"Experimental Crowd+AI Approaches to Track Accessibility Features in Sidewalk Intersections Over Time","authors":"Ather Sharif, Paari Gopal, Michael Saugstad, Shiven Bhatt, Raymond Fok, Galen Cassebeer Weld, Kavi Dey, Jon E. Froehlich","doi":"10.1145/3441852.3476549","DOIUrl":"https://doi.org/10.1145/3441852.3476549","url":null,"abstract":"How do sidewalks change over time? Are there geographic or socioeconomic patterns to this change? These questions are important but difficult to address with current GIS tools and techniques. In this demo paper, we introduce three preliminary crowd+AI (Artificial Intelligence) prototypes to track changes in street intersection accessibility over time—specifically, curb ramps—and report on results from a pilot usability study.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121760771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mixed Abilities and Varied Experiences: a group autoethnography of a virtual summer internship
Kelly Avery Mack, Maitraye Das, D. Jain, Danielle Bragg, John Tang, Andrew Begel, Erin Beneteau, J. Davis, Abraham Glasser, J. Park, Venkatesh Potluri
https://doi.org/10.1145/3441852.3471199
The COVID-19 pandemic forced many people to convert their daily work lives to a “virtual” format in which everyone connected remotely from home. In this new, virtual environment, accessibility barriers changed, in some respects for the better (e.g., more flexibility) and in others for the worse (e.g., problems including American Sign Language interpreters over video calls). Microsoft Research held its first cohort of all-virtual interns in 2020. We, the authors (full-time and intern members and affiliates of the Ability Team, a research team focused on accessibility), reflect on our virtual work experiences during the summer intern season as a team whose members span a variety of abilities, positions, and levels of seniority. Through our autoethnographic method, we provide a nuanced view into the experiences of a mixed-ability, virtual team and how the virtual setting affected the team's accessibility. We then reflect on these experiences, noting the strategies that successfully promoted access and the areas in which we could have further improved access. Finally, we present guidelines for future virtual mixed-ability teams looking to improve access.
{"title":"Mixed Abilities and Varied Experiences: a group autoethnography of a virtual summer internship","authors":"Kelly Avery Mack, Maitraye Das, D. Jain, Danielle Bragg, John Tang, Andrew Begel, Erin Beneteau, J. Davis, Abraham Glasser, J. Park, Venkatesh Potluri","doi":"10.1145/3441852.3471199","DOIUrl":"https://doi.org/10.1145/3441852.3471199","url":null,"abstract":"The COVID-19 pandemic forced many people to convert their daily work lives to a “virtual” format where everyone connected remotely from their home. In this new, virtual environment, accessibility barriers changed, in some respects for the better (e.g., more flexibility) and in other aspects, for the worse (e.g., problems including American Sign Language interpreters over video calls). Microsoft Research held its first cohort of all virtual interns in 2020. We the authors, full time and intern members and affiliates of the Ability Team, a research team focused on accessibility, reflect on our virtual work experiences as a team consisting of members with a variety of abilities, positions, and seniority during the summer intern season. Through our autoethnographic method, we provide a nuanced view into the experiences of a mixed-ability, virtual team, and how the virtual setting affected the team’s accessibility. We then reflect on these experiences, noting the successful strategies we used to promote access and the areas in which we could have further improved access. Finally, we present guidelines for future virtual mixed-ability teams looking to improve access.","PeriodicalId":107277,"journal":{"name":"Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123940805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}