Meal preparation is a complex multisensory task that requires many decisions to be made based on the appearance of the dish. This alienates individuals with low vision and makes cooking meals independently inaccessible. Products designed for individuals with low vision rarely aid with tasks that involve the application of heat. As people with vision impairments have different requirements for technology, it is imperative that the behaviours and problems they face are thoroughly understood. We conducted a study to understand how users perform tasks involving heat application. We identified four cooking techniques commonly used to prepare Indian dishes and carried out interviews with a diverse group of visually impaired persons (n=12). The findings include insights about the behaviours, problems and strategies of visually impaired persons while preparing meals using the following techniques: boiling, simmering, roasting, and frying. This work describes factors that affect the behaviour of Indian visually impaired persons during meal preparation and the strategies they use to mitigate the challenges they face. Based on the findings, we propose a set of considerations with implications for the design of accessibility tools such as assistive devices, rehabilitation programs and strategies.
{"title":"Behaviors, Problems and Strategies of Visually Impaired Persons During Meal Preparation in the Indian Context : Challenges and Opportunities for Design","authors":"Avyay Ravi Kashyap","doi":"10.1145/3373625.3417083","DOIUrl":"https://doi.org/10.1145/3373625.3417083","url":null,"abstract":"Meal preparation is a complex multisensorial task that requires many decisions to be made based on the appearance of the dish. This alienates individuals with low vision and makes cooking meals independently inaccessible. Products designed for individuals with low vision rarely aid with tasks that involve application of heat. As people with vision impairments have different requirements for technology, it is imperative that the behaviours and problems faced are thoroughly understood. A study to understand how users perform tasks involving heat application was conducted. Four cooking techniques commonly used to prepare Indian dishes were identified and interviews were carried out with a diverse group of visually impaired persons (n=12). The findings include insights about behaviours, problems and strategies employed by visually impaired persons while preparing meals using the following techniques: Boiling, Simmering, Roasting, and Frying. This work describes factors that affect behaviour during meal preparation by Indian visually impaired persons, and the various strategies used to mitigate challenges faced. 
The findings have been used to propose a set of considerations that have implications on the design of accessibility tools such as assistive devices, rehabilitation programs and strategies.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132040855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Games bring people together in immersive and challenging interactions. In this paper, we share multiplayer gaming experiences of people with visual impairments, collected from interviews with 10 adults and 10 minors and from 140 responses to an online survey. We include the perspectives of 17 sighted people who play with someone who has a visual impairment, collected in a second online survey. Our focus is on group play, particularly the problems and opportunities that arise in mixed-visual-ability scenarios. These accounts show that people with visual impairments play diverse games but face limitations when playing with others who have different visual abilities. What stands out is the lack of intersection in gaming opportunities and, consequently, in the habits and interests of people with different visual abilities. We highlight barriers associated with these experiences beyond inaccessibility issues and discuss implications and opportunities for the design of mixed-ability gaming.
{"title":"Playing With Others: Depicting Multiplayer Gaming Experiences of People With Visual Impairments","authors":"David Gonçalves, André Rodrigues, Tiago Guerreiro","doi":"10.1145/3373625.3418304","DOIUrl":"https://doi.org/10.1145/3373625.3418304","url":null,"abstract":"Games bring people together in immersive and challenging interactions. In this paper, we share multiplayer gaming experiences of people with visual impairments collected from interviews with 10 adults and 10 minors, and 140 responses to an online survey. We include the perspectives of 17 sighted people who play with someone who has a visual impairment, collected in a second online survey. Our focus is on group play, particularly on the problems and opportunities that arise from mixed-visual-ability scenarios. These show that people with visual impairments are playing diverse games, but face limitations in playing with others who have different visual abilities. What stands out is the lack of intersection in gaming opportunities, and consequently, in habits and interests of people with different visual abilities. We highlight barriers associated with these experiences beyond inaccessibility issues and discuss implications and opportunities for the design of mixed-ability gaming.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132263853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Despite a global upward trend in mobile device ownership, older adults continue to use few applications and fewer features. For example, besides directions, maps provide information about public transit, traffic, and amenities. Mobile maps can assist older adults in navigating independently, availing themselves of city facilities, and exploring new places. But how accessible are current mobile maps to older adults? In this paper, we present results from a qualitative study examining how older adults use mobile maps and the difficulties they encounter. We identified and categorized 172 problems across 17 older adults (ages 60+). Results indicate that non-motor issues were more difficult to mitigate than motor issues and led to the most frustration and resignation. These non-motor issues stemmed from three factors: inadequate visual saliency, ambiguous affordances, and low information scent, which made it difficult for older adults to notice, use, and infer, respectively. We propose two design solutions to address these non-motor issues.
{"title":"“Maps are hard for me”: Identifying How Older Adults Struggle with Mobile Maps","authors":"Ja Eun Yu, Debaleena Chattopadhyay","doi":"10.1145/3373625.3416997","DOIUrl":"https://doi.org/10.1145/3373625.3416997","url":null,"abstract":"Despite a global upward trend in mobile device ownership, older adults continue to use few applications and fewer features. For example, besides directions, maps provide information about public transit, traffic, and amenities. Mobile maps can assist older adults to navigate independently, avail city facilities, and explore new places. But how accessible are current mobile maps to older adults? In this paper, we present results from a qualitative study examining how older adults use mobile maps and the difficulties they encounter. 172 problems were identified and categorized across 17 older adults (ages 60+). Results indicate that non-motor issues were more difficult to mitigate than motor issues and led to maximum frustration and resignation. These non-motor issues stemmed from three factors, inadequate visual saliency, ambiguous affordances, and low information scent, making it difficult for older adults to notice, use, and infer, respectively. Two design solutions are proposed to address these non-motor issues.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114064562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lawrence H. Kim, Abena Boadi-Agyemang, A. Siu, John C. Tang
Social media platforms facilitate communication through sharing photos and videos. The abundance of visual content creates accessibility issues, particularly for people who are blind or have low vision. While assistive technologies like screen readers can help when alt text for images is provided, synthesized voices lack the human element that is important for social interaction. Here, we investigate when it makes the most sense to use human narration, as opposed to a screen reader, to describe photos in a social media context. We explore the effects of voice familiarity (i.e., whether you hear the voice of someone you know) and the perspective of the description (i.e., first- vs. third-person point of view (POV)). A preliminary study suggests that users prefer hearing from a person they know when the content is described in first-person POV, whereas a synthesized voice is preferred for content described in third-person POV.
{"title":"When to Add Human Narration to Photo-Sharing Social Media","authors":"Lawrence H. Kim, Abena Boadi-Agyemang, A. Siu, John C. Tang","doi":"10.1145/3373625.3418013","DOIUrl":"https://doi.org/10.1145/3373625.3418013","url":null,"abstract":"Social media platforms facilitate communication through sharing photos and videos. The abundance of visual content creates accessibility issues, particularly for people who are blind or have low vision. While assistive technologies like screen readers can help when alt-text for images is provided, synthesized voices lack the human element that is important for social interaction. Here, we investigate when it makes the most sense to use human narration as opposed to a screen reader to describe photos in a social media context. We explore the effects of voice familiarity (i.e., whether you hear the voice of someone you know) and the perspective of the description (i.e., first vs. third person point-of-view (POV)). Preliminary study suggests that users prefer hearing from a person they know when the content is described in first person POV, whereas synthesized voice is preferred for content described in third person POV.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116340443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We demonstrate SoundLines, a mobile app designed to support children with visual impairments in exercising spatial exploration skills. This is achieved through multi-touch discovery of line segments on a touchscreen, supported by sonification feedback. The approach is implemented as a game in which the child guides a kitten to its mother cat by following, with a finger, the line connecting them.
{"title":"SoundLines: Exploration of Line Segments through Sonification and Multi-touch Interaction","authors":"D. Ahmetovic, C. Bernareggi, S. Mascetti, F. Pini","doi":"10.1145/3373625.3418041","DOIUrl":"https://doi.org/10.1145/3373625.3418041","url":null,"abstract":"We demonstrate SoundLines, a mobile app designed to support children with visual impairments in exercising spatial exploration skills. This is achieved through multi-touch discovery of line segments on touchscreen, supported by sonification feedback. The approach is implemented as a game in which the child needs to guide a kitten to find its mother cat by following with a finger the line connecting them.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125431373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper focuses on utilizing a combination of haptic stimuli and auditory clarification to elucidate statistical information, specifically line charts, to persons with visual impairments. Past research has explored a variety of vision-substitution methods to depict the shape and value information of line charts. It was identified that even though the general trend could be interpreted well, individual data values were not sufficiently perceived. This paper proposes a statistics-oriented approach: instead of reconstructing the shape of charts, it adopts a dimensionality-reduction strategy to split 2D information into bidirectional haptics and linear movements. An explicit voiceover of data values is provided based on one-dimensional finger movement to assist graph interpretation. Our evaluation study showed that this approach enabled users to efficiently decipher line chart information with appropriate cognitive demand and high data-interpretation accuracy.
{"title":"Tapsonic: One Dimensional Finger Mounted Multimodal Line Chart Reader","authors":"Zeyuan Zhang","doi":"10.1145/3373625.3417075","DOIUrl":"https://doi.org/10.1145/3373625.3417075","url":null,"abstract":"This paper focuses on utilizing combination of haptics stimuli and auditory clarification to elucidate statistical information, specifically line chart, to person with visual impairment. Past research has explored varieties of vision substitute methods to depict shape and value information of line charts. It was identified that even though the general trend could be well interpreted, individual data values were not sufficiently perceived. This paper proposes a statistic orientated approach, instead of reconstructing the shape of charts, it adopts a dimensionality reduction strategy to split 2D information into bidirectional haptics and linear movements. Explicit voiceover of data values would be provided based on one-dimensional finger movement to assist graph interpretation. Our evaluation study showed that such an approach enabled users to efficiently decipher line chart information with an appropriate cognitive demand and high data interpretation accuracy.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125943643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Today, the vast majority of Europeans use smartphones. However, touch displays are still not accessible to everyone. Individuals with deafblindness, for example, often face difficulties accessing vision-based touchscreens. Moreover, they typically have few financial resources, which increases the need for customizable, low-cost assistive devices. In this work in progress, we present four prototypes made from low-cost, everyday materials that make modern pattern lock mechanisms more accessible to individuals with vision impairments or even deafblindness. Two of the four prototypes turned out to be functional tactile overlays for accessing the digital 4-by-4 grids that are regularly used to encode dynamic dot patterns. In future work, we will conduct a user study investigating whether these two prototypes can make dot-based pattern lock mechanisms more accessible for individuals with visual impairments or deafblindness.
{"title":"Exploring Low-Cost Materials to Make Pattern-Based Lock-Screens Accessible for Users with Visual Impairments or Deafblindness","authors":"Lea Buchweitz, A. Theil, James Gay, Oliver Korn","doi":"10.1145/3373625.3418020","DOIUrl":"https://doi.org/10.1145/3373625.3418020","url":null,"abstract":"Nowadays, the wide majority of Europeans uses smartphones. However, touch displays are still not accessible by everyone. Individuals with deafblindness, for example, often face difficulties in accessing vision-based touchscreens. Moreover, they typically have few financial resources which increases the need for customizable, low-cost assistive devices. In this work-in-progress, we present four prototypes made from low-cost, every-day materials, that make modern pattern lock mechanisms more accessible to individuals with vision impairments or even with deafblindness. Two out of four prototypes turned out to be functional tactile overlays for accessing digital 4-by-4 grids that are regularly used to encode dynamic dot patterns. In future work, we will conduct a user study investigating whether these two prototypes can make dot-based pattern lock mechanisms more accessible for individuals with visual impairments or deafblindness.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127707626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autistic individuals engage in sense-making as they seek to better understand themselves and relate to others within a society formed by neuro-typical social norms. Our research examines the ways in which autistic individuals engage in sense-making activities about autism on Twitter. We collected autism-oriented Twitter conversations and the Twitter user profile data of people participating in those conversations. Our research contributes empirical evidence demonstrating that autistic sense-making on Twitter is constituted by (1) engaging in dynamic discussions of life experiences, (2) countering stigma with actions of advocacy, and (3) enacting neuro-atypical social norms.
{"title":"#ActuallyAutistic Sense-Making on Twitter","authors":"Annuska Zolyomi, Ridley Jones, Tomer Kaftan","doi":"10.1145/3373625.3418001","DOIUrl":"https://doi.org/10.1145/3373625.3418001","url":null,"abstract":"Autistic individuals engage in sense-making as they seek to better understand themselves and relate to others within a society formed by neuro-typical social norms. Our research examines the ways in which autistic individuals engage in sense-making activities about autism on Twitter. We collected autism-oriented Twitter conversations and Twitter user profiles data of people participating in those conversation. Our research contributes empirical evidence demonstrating that that autistic sense-making on Twitter is constituted by (1) engaging in dynamic discussions of life experiences, (2) countering stigma with actions of advocacy, and (3) enacting neuro-atypical social norms.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132866162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Filip Bircanin, Laurianne Sitbon, Bernd Ploderer, A. Bayor, Michael Esteban, Stewart Koplick, M. Brereton
In this paper, we present a case study of the iterative design of TalkingBox, a communication device designed with a person with a severe cognitive disability and his support network. TalkingBox combines graphic symbols with tangible technology to foster the use of symbolic communication by leveraging the person's strength and interest in memory matching games. In the course of designing, trialing and iterating TalkingBox, we discovered that the design not only supported the development of symbolic communication but also revealed new interests and strengths of our participant. TalkingBox highlighted opportunities for interactions with peers, revealed new skills in visual discrimination, and evidenced new interests. These could, in turn, support staff and family in adapting their support. More importantly, TalkingBox became a living portfolio that presents our participant with a severe disability through the lens of his strengths. We discuss opportunities for research through co-design to open new avenues for future communication technologies.
{"title":"The TalkingBox.: Revealing Strengths of Adults with Severe Cognitive Disabilities","authors":"Filip Bircanin, Laurianne Sitbon, Bernd Ploderer, A. Bayor, Michael Esteban, Stewart Koplick, M. Brereton","doi":"10.1145/3373625.3417025","DOIUrl":"https://doi.org/10.1145/3373625.3417025","url":null,"abstract":"In this paper, we present a case study of the iterative design of TalkingBox, a communication device designed with a person with a severe cognitive disability and his support network. TalkingBox combines graphic symbols with tangible technology to foster the use of symbolic communication by leveraging the person's strength and interest in memory matching games. In the course of designing, trialing and iterating the TalkingBox, we discovered that the design supported not only the development of symbolic communication, but also revealed new interests and strengths of our participant. TalkingBox highlighted opportunities for interactions with peers, revealed new skills in visual discrimination, and evidenced interests. These could, in turn, support staff and family to adapt their support. More importantly, TalkingBox had become a living portfolio presenting our participant with severe disability through the lens of their strengths. 
We discuss opportunities for research through co-design to open new avenues for future communication technologies.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132531709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Menzies, Benjamin M. Gorman, Garreth W. Tigwell
Within accessibility research, it is important for researchers to understand the lived experience of participants. Researchers often use in-person interviews to collect this data. However, in-person interviews can present communication barriers and introduce logistical challenges around scheduling and geographical location. For a recent study involving screen reader users, we conducted interviews on online chat-based platforms. In contrast to in-person interviews, there was little guidance within the field on conducting interviews with screen reader users on these platforms. To understand how effective the platforms were, we collected feedback from our participants on their experience after they completed their interview. In this paper, we report on our experience of conducting online chat-based interviews with screen reader users. We present reflections from both the interviewer and participants on their experiences during the aforementioned study and outline four lessons we learned in the process.
{"title":"Reflections on Using Chat-Based Platforms for Online Interviews with Screen-Reader Users","authors":"R. Menzies, Benjamin M. Gorman, Garreth W. Tigwell","doi":"10.1145/3373625.3418000","DOIUrl":"https://doi.org/10.1145/3373625.3418000","url":null,"abstract":"Within accessibility research, it is important for researchers to understand the lived experience of participants. Researchers often use in-person interviews to collect this data. However, in-person interviews can result in communication barriers and introduce logistical challenges surrounding scheduling and geographical location. For a recent study involving screen reader users, we used online chat-based platforms to conduct interviews. Unlike in-person interviews, there was little guidance within the field on conducting interviews using these platforms with screen reader users. To understand how effective these platforms were, we collected feedback from our participants on their experience after completing their interview. In this paper, we report on our experience of conducting online chat-based interviews with screen reader users. We present reflections from both the interviewer and participants on their experiences during the aforementioned study, and outline four lessons we learned during the process.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121232526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}