A Preliminary Study on Understanding Voice-only Online Meetings Using Emoji-based Captioning for Deaf or Hard of Hearing Users
Kotaro Oomori, Akihisa Shitara, Tatsuya Minagawa, S. Sarcar, Yoichi Ochiai
DOI: 10.1145/3373625.3418032
Amid the coronavirus disease 2019 pandemic, the use of online meetings is increasing rapidly. Deaf or hard of hearing (DHH) people participating in online meetings often have difficulty capturing the affective states of other speakers. Recent studies have shown that emoji-based representations of spoken text are effective at conveying such affective states. Nevertheless, in voice-only online meetings, it is still unclear how emoji-based spoken text can help DHH people understand speakers' feelings when their facial expressions cannot be perceived. We therefore conducted a preliminary experiment on the effect of emoji-based text representation during voice-only online meetings, leveraging an emoji-based captioning system. Our preliminary results demonstrate the need for a more advanced system that helps DHH people understand voice-only online meetings more meaningfully.
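The abstract does not describe how the captioning system is implemented; as a minimal illustrative sketch of the general idea, the Python snippet below appends an emotion emoji, inferred from the caption text, to each caption line. The keyword-based classifier is a toy stand-in for whatever affect model the authors actually used, and every name in it is a hypothetical placeholder.

```python
# Minimal sketch of emoji-based captioning (illustrative only; not the authors' system).
# Assumes captions arrive as plain text from a speech-to-text service; the "classifier"
# is a toy keyword lookup standing in for a real affect model.

EMOTION_EMOJI = {
    "joy": "\U0001F604",      # grinning face
    "sadness": "\U0001F622",  # crying face
    "anger": "\U0001F620",    # angry face
    "neutral": "",            # no emoji appended
}

KEYWORDS = {
    "great": "joy", "happy": "joy", "thanks": "joy",
    "sorry": "sadness", "unfortunately": "sadness",
    "unacceptable": "anger", "frustrated": "anger",
}

def classify_emotion(utterance: str) -> str:
    """Toy affect classifier: first keyword hit wins, otherwise neutral."""
    for token in utterance.lower().split():
        word = token.strip(".,!?")
        if word in KEYWORDS:
            return KEYWORDS[word]
    return "neutral"

def emoji_caption(utterance: str) -> str:
    """Append the inferred emotion emoji to a caption line."""
    emoji = EMOTION_EMOJI[classify_emotion(utterance)]
    return f"{utterance} {emoji}".rstrip()

if __name__ == "__main__":
    print(emoji_caption("Thanks everyone, that was a great discussion!"))
    print(emoji_caption("Unfortunately we missed the deadline."))
```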
{"title":"A Preliminary Study on Understanding Voice-only Online Meetings Using Emoji-based Captioning for Deaf or Hard of Hearing Users","authors":"Kotaro Oomori, Akihisa Shitara, Tatsuya Minagawa, S. Sarcar, Yoichi Ochiai","doi":"10.1145/3373625.3418032","DOIUrl":"https://doi.org/10.1145/3373625.3418032","url":null,"abstract":"In the midst of the coronavirus disease 2019 pandemic, online meetings are rapidly increasing. Deaf or hard of hearing (DHH) people participating in an online meeting often face difficulties in capturing the affective states of other speakers. Recent studies have shown the effectiveness of emoji-based representation of spoken text to capture such affective states. Nevertheless, in voice-only online meetings, it is still not clear how emoji-based spoken texts can assist DHH people to understand the feelings of speakers without perceiving their facial expressions. We therefore conducted a preliminary experiment to understand the effect of emoji-based text representation during voice-only online meetings by leveraging an emoji-based captioning system. Our preliminary results demonstrate the necessity of designing an advanced system to help DHH people understanding the voice-only online meetings more meaningfully.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134283500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SoundWatch: Exploring Smartwatch-based Deep Learning Approaches to Support Sound Awareness for Deaf and Hard of Hearing Users
D. Jain, Hung Ngo, Pratyush Patel, Steven M. Goodman, Leah Findlater, Jon E. Froehlich
DOI: 10.1145/3373625.3416991
Smartwatches have the potential to provide glanceable, always-available sound feedback to people who are deaf or hard of hearing. In this paper, we present a performance evaluation of four low-resource deep learning sound classification models (MobileNet, Inception, ResNet-lite, and VGG-lite) across four device architectures: watch-only, watch+phone, watch+phone+cloud, and watch+cloud. While direct comparison with prior work is challenging, our results show that the best model, VGG-lite, performed similarly to the state of the art for non-portable devices, with an average accuracy of 81.2% (SD=5.8%) across 20 sound classes and 97.6% (SD=1.7%) across the three highest-priority sounds. Among device architectures, watch+phone provided the best balance of CPU, memory, network usage, and classification latency. Based on these experimental results, we built a smartwatch-based sound awareness app, SoundWatch, and conducted a qualitative lab evaluation with eight DHH participants. The qualitative findings show support for our sound awareness app but also uncover issues with misclassifications, latency, and privacy. We close by offering design considerations for future wearable sound awareness technology.
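The paper evaluates compact deep learning classifiers deployed on watch and phone hardware; a typical on-device loop runs a small, quantized model over short audio segments. The Python sketch below illustrates such a loop with TensorFlow Lite. The model file name, label list, and expectation that the model accepts a flat, fixed-length feature vector are placeholder assumptions, not the authors' actual artifacts.

```python
# Illustrative sketch of an on-device sound classification loop (not the authors' code).
# Assumes a TFLite classifier ("sound_classifier.tflite") and a label list are available;
# both names are hypothetical placeholders.
import numpy as np
import tensorflow as tf

LABELS = ["dog_bark", "door_knock", "fire_alarm"]  # placeholder subset of the 20 classes

interpreter = tf.lite.Interpreter(model_path="sound_classifier.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(segment: np.ndarray) -> tuple[str, float]:
    """Run one fixed-length, already-preprocessed audio segment through the model."""
    features = segment.reshape(inp["shape"]).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], features)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    top = int(np.argmax(scores))
    return LABELS[top], float(scores[top])

# Example: a zero-filled segment sized to the model input, just to exercise the pipeline.
label, confidence = classify(np.zeros(int(np.prod(inp["shape"])), dtype=np.float32))
print(f"Predicted {label} with confidence {confidence:.2f}")
```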
{"title":"SoundWatch: Exploring Smartwatch-based Deep Learning Approaches to Support Sound Awareness for Deaf and Hard of Hearing Users","authors":"D. Jain, Hung Ngo, Pratyush Patel, Steven M. Goodman, Leah Findlater, Jon E. Froehlich","doi":"10.1145/3373625.3416991","DOIUrl":"https://doi.org/10.1145/3373625.3416991","url":null,"abstract":"Smartwatches have the potential to provide glanceable, always-available sound feedback to people who are deaf or hard of hearing. In this paper, we present a performance evaluation of four low-resource deep learning sound classification models: MobileNet, Inception, ResNet-lite, and VGG-lite across four device architectures: watch-only, watch+phone, watch+phone+cloud, and watch+cloud. While direct comparison with prior work is challenging, our results show that the best model, VGG-lite, performed similar to the state of the art for non-portable devices with an average accuracy of 81.2% (SD=5.8%) across 20 sound classes and 97.6% (SD=1.7%) across the three highest-priority sounds. For device architectures, we found that the watch+phone architecture provided the best balance between CPU, memory, network usage, and classification latency. Based on these experimental results, we built and conducted a qualitative lab evaluation of a smartwatch-based sound awareness app, called SoundWatch (Figure 1), with eight DHH participants. Qualitative findings show support for our sound awareness app but also uncover issues with misclassifications, latency, and privacy concerns. We close by offering design considerations for future wearable sound awareness technology.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121870248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing Playful Activities to Promote Practice of Preposition Skills for Kids with ASD
Advait Bhat
DOI: 10.1145/3373625.3418004
Children with autism spectrum disorder (ASD) and other developmental disorders tend to have difficulty with language and communication, especially with abstract language concepts such as prepositions. Existing clinical therapy methods are difficult to carry out at home. In this paper, we describe the design process of translating an existing therapy technique into a playful activity that lets children with ASD practice prepositions. The design was generated through a deductive process grounded in theory and expert evaluation. The aim is to increase overall compliance by making the therapy activity more playful and fun.
{"title":"Designing Playful Activities to Promote Practice of Preposition Skills for Kids with ASD","authors":"Advait Bhat","doi":"10.1145/3373625.3418004","DOIUrl":"https://doi.org/10.1145/3373625.3418004","url":null,"abstract":"Children with autism spectrum disorder and other developmental disorders tend to have difficulty in language and communication, especially in abstract language concepts like prepositions. Existing clinically used methods of conducting therapy are difficult to conduct at home. In this paper, we try to show the design and process to translate an existing therapy technique into a playful activity for children with ASD to practice prepositions. The design is generated through a deductive process and was based in theory and expert evaluation. The aim is to increase overall compliance by making the therapy activity more playful and fun.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121996824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Recommending Accessibility Features on Mobile Devices
Jason Wu, G. Reyes, Samuel White, Xiaoyi Zhang, Jeffrey P. Bigham
DOI: 10.1145/3373625.3418007
Numerous accessibility features have been developed to broaden who can access computing devices and how. Increasingly, these features are included as part of popular platforms, e.g., Apple iOS, Google Android, and Microsoft Windows. Despite their potential to improve the computing experience, many users are unaware of these features and do not know which combination could benefit them. In this work, we first quantified this problem by surveying 100 participants online (including 25 older adults) about their knowledge of accessibility features and the features they could benefit from, showing very low awareness. We then developed four prototypes spanning several accessibility categories (e.g., vision, hearing, motor) that embody signals and detection strategies applicable to accessibility recommendation in general. Preliminary results from a study with 20 older adults show that proactive recommendation is a promising approach for better pairing users with accessibility features they could benefit from.
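The abstract does not spell out the detection strategies the prototypes use. Purely as a hedged illustration of the idea of mapping observed usage signals to feature recommendations, the sketch below applies simple thresholds; the signal names, threshold values, and feature labels are hypothetical, not the prototypes' actual rules.

```python
# Hypothetical sketch of signal-based accessibility recommendation.
# All signals, thresholds, and feature names below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UsageSignals:
    median_text_zoom: float       # observed pinch-to-zoom factor on text
    missed_tap_rate: float        # fraction of taps landing outside their targets
    media_volume_fraction: float  # typical volume relative to the device maximum

def recommend_features(s: UsageSignals) -> list[str]:
    """Map observed interaction signals to candidate accessibility features."""
    recs = []
    if s.median_text_zoom > 1.3:
        recs.append("Larger Text / Display Zoom")
    if s.missed_tap_rate > 0.15:
        recs.append("Touch Accommodations (longer hold, larger targets)")
    if s.media_volume_fraction > 0.9:
        recs.append("Captions / Mono Audio / Sound Amplification")
    return recs

print(recommend_features(UsageSignals(1.6, 0.05, 0.95)))
```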
{"title":"Towards Recommending Accessibility Features on Mobile Devices","authors":"Jason Wu, G. Reyes, Samuel White, Xiaoyi Zhang, Jeffrey P. Bigham","doi":"10.1145/3373625.3418007","DOIUrl":"https://doi.org/10.1145/3373625.3418007","url":null,"abstract":"Numerous accessibility features have been developed to increase who and how people can access computing devices. Increasingly, these features are included as part of popular platforms, e.g., Apple iOS, Google Android, and Microsoft Windows. Despite their potential to improve the computing experience, many users are unaware of these features and do not know which combination of them could benefit them. In this work, we first quantified this problem by surveying 100 participants online (including 25 older adults) about their knowledge of accessibility and features that they could benefit from, showing very low awareness. We developed four prototypes spanning numerous accessibility categories (e.g., vision, hearing, motor), that embody signals and detection strategies applicable to accessibility recommendation in general. Preliminary results from a study with 20 older adults show that proactive recommendation is a promising approach for better pairing users with accessibility features they could benefit from.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125725893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bridging the Divide: Exploring the use of digital and physical technology to aid mobility impaired people living in an informal settlement
G. Barbareschi, B. Oldfrey, Long Xin, G. Magomere, Wycliffe Ambeyi Wetende, Carol Wanjira, J. Olenja, Victoria Austin, C. Holloway
DOI: 10.1145/3373625.3417021
Living in informality is challenging. It is even harder when you have a mobility impairment. Traditional assistive products such as wheelchairs are essential to enable people to travel. Wheelchairs are considered a Human Right, yet they are difficult to access. Mobile phones, on the other hand, are becoming ubiquitous and are increasingly seen as an assistive technology. Should a mobile phone therefore be considered a Human Right as well? To understand the role of the mobile phone in contrast to a more traditional assistive technology, the wheelchair, we conducted contextual interviews with eight mobility-impaired people living in Kibera, a large informal settlement in Nairobi. Our findings show that mobile phones act as an accessibility bridge when physical accessibility becomes too challenging. We examine our findings from two perspectives, human infrastructure and interdependence, contributing an understanding of the role that supported interactions play in enabling both the wheelchair and the mobile phone to be used. This further demonstrates the critical importance of designing for context and understanding the social fabric that characterizes informal settlements; it is this social fabric that enables the technology to be usable.
{"title":"Bridging the Divide: Exploring the use of digital and physical technology to aid mobility impaired people living in an informal settlement","authors":"G. Barbareschi, B. Oldfrey, Long Xin, G. Magomere, Wycliffe Ambeyi Wetende, Carol Wanjira, J. Olenja, Victoria Austin, C. Holloway","doi":"10.1145/3373625.3417021","DOIUrl":"https://doi.org/10.1145/3373625.3417021","url":null,"abstract":"Living in informality is challenging. It is even harder when you have a mobility impairment. Traditional assistive products such as wheelchairs are essential to enable people to travel. Wheelchairs are considered a Human Right. However, they are difficult to access. On the other hand, mobile phones are becoming ubiquitous and are increasingly seen as an assistive technology. Should therefore a mobile phone be considered a Human Right? To help understand the role of the mobile phone in contrast of a more traditional assistive technology – the wheelchair, we conducted contextual interviews with eight mobility impaired people who live in Kibera, a large informal settlement in Nairobi. Our findings show mobile phones act as an accessibility bridge when physical accessibility becomes too challenging. We explore our findings from two perspective – human infrastructure and interdependence, contributing an understanding of the role supported interactions play in enabling both the wheelchair and the mobile phone to be used. This further demonstrates the critical nature of designing for context and understanding the social fabric that characterizes informal settlements. It is this social fabric which enables the technology to be useable.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126023160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chat in the Hat: A Portable Interpreter for Sign Language Users
Larwan Berke, W. Thies, Danielle Bragg
DOI: 10.1145/3373625.3417026
Many Deaf and Hard-of-Hearing (DHH) individuals rely on sign language interpreting to communicate with hearing peers. If on-site interpreting is not available, DHH individuals may use remote interpreting over a smartphone video call. However, this solution requires the DHH individual to give up either 1) the use of one signing hand, by holding the smartphone, or 2) their ability to multitask and move around, by propping the smartphone up in a fixed location. We explore this problem within the context of the workplace and present a prototype hands-free device using augmented reality glasses with a hat-mounted fisheye camera and microphone/speaker. To assess the validity of our design, we conducted 1) a video interpretability experiment and 2) a user study with 18 participants (9 DHH, 9 hearing) in a workplace environment. Our results suggest that a hands-free device can support accurate interpretation while enhancing personal interactions.
{"title":"Chat in the Hat: A Portable Interpreter for Sign Language Users","authors":"Larwan Berke, W. Thies, Danielle Bragg","doi":"10.1145/3373625.3417026","DOIUrl":"https://doi.org/10.1145/3373625.3417026","url":null,"abstract":"Many Deaf and Hard-of-Hearing (DHH) individuals rely on sign language interpreting to communicate with hearing peers. If on-site interpreting is not available, DHH individuals may use remote interpreting over a smartphone video-call. However, this solution requires the DHH individual to give up either 1) the use of one signing hand by holding the smartphone or 2) their ability to multitask and move around by propping the smartphone up in a fixed location. We explore this problem within the context of the workplace, and present a prototype hands-free device using augmented reality glasses with a hat-mounted fisheye camera and mic/speaker. To explore the validity of our design, we conducted 1) a video interpretability experiment, and 2) a user study with 18 participants (9 DHH, 9 hearing) in a workplace environment. Our results suggest that a hands-free device can support accurate interpretation while enhancing personal interactions.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"12 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126146088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PLACES: A Framework for Supporting Blind and Partially Sighted People in Outdoor Leisure Activities
Maryam Bandukda, C. Holloway, Aneesha Singh, N. Bianchi-Berthouze
DOI: 10.1145/3373625.3417001
Interacting with natural environments such as parks and the countryside improves health and wellbeing. These spaces allow for exercise, relaxation, socialising, and exploring nature; however, they are often not used by blind and partially sighted people (BPSP). To better understand the needs of BPSP for outdoor leisure experiences and the barriers they encounter in planning, accessing, and engaging with natural environments, we conducted an exploratory qualitative online survey (22 BPSP), semi-structured interviews (20 BPSP), and a focus group (9 BPSP; 1 support worker). We also explored how current technologies support park experiences for BPSP. Our findings identify common barriers across the stages of planning (e.g., limited accessible information about parks), accessing (e.g., poor wayfinding systems), engaging with, and sharing leisure experiences. Across all stages (PLan, Access, Engage, Share) we found a common theme of Contribute: BPSP wished to co-plan their trips, contribute to ways of helping others access a place, develop multisensory approaches to engaging with their surroundings, and share their experiences to help others. In this paper, we present initial work supporting the development of a framework for understanding the leisure experiences of BPSP. We explore this theme of contribution and propose a framework in which it feeds into each stage of the leisure experience, resulting in the proposed PLACES framework (PLan, Access, Contribute, Engage, Share), which aims to provide a foundation for future research on accessibility and outdoor leisure experiences for BPSP and people with disabilities.
{"title":"PLACES: A Framework for Supporting Blind and Partially Sighted People in Outdoor Leisure Activities","authors":"Maryam Bandukda, C. Holloway, Aneesha Singh, N. Bianchi-Berthouze","doi":"10.1145/3373625.3417001","DOIUrl":"https://doi.org/10.1145/3373625.3417001","url":null,"abstract":"Interacting with natural environments such as parks and the countryside improves health and wellbeing. These spaces allow for exercise, relaxation, socialising and exploring nature, however, they are often not used by blind and partially sighted people (BPSP). To better understand the needs of BPSP for outdoor leisure experience and barriers encountered in planning, accessing and engaging with natural environments, we conducted an exploratory qualitative online survey (22 BPSP), semi-structured interviews (20 BPSP) and a focus group (9 BPSP; 1 support worker). We also explored how current technologies support park experiences for BPSP. Our findings identify common barriers across the stages of planning (e.g. limited accessible information about parks), accessing (e.g. poor wayfinding systems), engaging with and sharing leisure experiences. Across all stages (PLan, Access, Engage, Share) we found a common theme of Contribute. BPSP wished to co-plan their trip, contribute to ways of helping others access a place, develop multisensory approaches to engaging in their surroundings and share their experiences to help others. In this paper, we present the initial work supporting the development of a framework for understanding the leisure experiences of BPSP. We explore this theme of contribution and propose a framework where this feeds into each of the stages of leisure experience, resulting in the proposed, PLACES framework (PLan, Access, Contribute, Engage, Share), which aims to provide a foundation for future research on accessibility and outdoor leisure experiences for BPSP and people with disabilities.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122591336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TACTOPI: a Playful Approach to Promote Computational Thinking for Visually Impaired Children
L. Abreu, A. Pires, Tiago Guerreiro
DOI: 10.1145/3373625.3418003
Playful activities are common in introductory programming settings. There is normally a virtual character or a physical robot that has to collect items or reach a goal within a map. Visually, these activities tend to be exciting enough to keep children engaged: there is constant feedback about the actions being performed, and the virtual environments tend to be stimulating and aesthetically pleasant. Conversely, in adaptations for visually impaired children, these environments tend to become poorer, sacrificing the story in favour of the programming actions and their dull mechanics (e.g., place an arrow block to move the character forward). In this paper, we present TACTOPI, a playful environment designed from the ground up to be rich in both its story (a nautical game) and its mechanics (e.g., a physical robot-boat controlled with a 3D-printed wheel), tailored to promote computational thinking at different levels (4 to 8 years old). This poster intends to provoke discussion and motivate accessibility researchers who are interested in computational thinking to make playfulness a priority.
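The abstract mentions a robot-boat steered with a 3D-printed wheel but does not describe the control interface. Purely as an illustration of how such a wheel input could drive the boat, the sketch below converts a wheel angle into differential motor speeds; the angle range, base speed, and motor model are assumptions, not TACTOPI's actual implementation.

```python
# Hypothetical sketch: steering-wheel angle to differential motor speeds for a robot boat.
# The -90..90 degree range and the 0..1 motor scale are illustrative assumptions.
def wheel_to_motor_speeds(wheel_angle_deg: float, base_speed: float = 0.5) -> tuple[float, float]:
    """Convert a steering angle (-90..90 degrees) into (left, right) motor speeds in 0..1."""
    turn = max(-90.0, min(90.0, wheel_angle_deg)) / 90.0  # normalise to -1..1
    left = base_speed * (1.0 + turn)   # turning right speeds up the left side
    right = base_speed * (1.0 - turn)  # and slows the right side
    return max(0.0, min(1.0, left)), max(0.0, min(1.0, right))

print(wheel_to_motor_speeds(45.0))   # gentle right turn
print(wheel_to_motor_speeds(-90.0))  # hard left turn
```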
{"title":"TACTOPI: a Playful Approach to Promote Computational Thinking for Visually Impaired Children","authors":"L. Abreu, A. Pires, Tiago Guerreiro","doi":"10.1145/3373625.3418003","DOIUrl":"https://doi.org/10.1145/3373625.3418003","url":null,"abstract":"The usage of playful activities is common in introductory programming settings. There is normally a virtual character or a physical robot that has to collect items or reach a goal within a map. Visually, these activities tend to be exciting enough to maintain children engaged: there is constant feedback about the actions being performed, and the virtual environments tend to be stimulating and aesthetically pleasant. Conversely, in adaptations for visually impaired children, these environments tend to become poorer, damaging the story at the cost of the programming actions and its dull mechanics (e.g., place a arrow block to move the character forward). In this paper, we present TACTOPI, a playful environment designed from the ground up to be rich in both its story (a nautical game) and its mechanics (e.g., a physical robot-boat controlled with a 3D printed wheel), tailored to promote computational thinking at different levels (4 to 8 years old). This poster intends to provoke discussion and motivate accessibility researchers that are interested in computational thinking to make playfulness a priority.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123385239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EASIER system. Language resources for cognitive accessibility.
Lourdes Moreno, Rodrigo Alarcón, Paloma Martínez
DOI: 10.1145/3373625.3418006
Difficulties in understanding texts that contain unusual words can create accessibility barriers for people with cognitive, language, and learning disabilities. This work presents the EASIER system, a web system that provides various tools to improve cognitive accessibility. Given a text in Spanish, the system detects complex words and offers synonyms, a definition, and a pictogram for each one. It draws on language and accessibility resources such as easy-to-read dictionaries. The web system can be accessed from both desktop computers and mobile devices, and a browser extension is also offered.
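As a rough illustration of the pipeline the abstract describes (detect complex words, then attach a synonym, definition, and pictogram to each), here is a minimal Python sketch. The tiny lexicons and the example URL are hypothetical stand-ins for the easy-to-read dictionaries and Spanish language resources the real system uses.

```python
# Illustrative sketch of the EASIER idea (not the actual system).
# The lexicons below are placeholders for real easy-to-read dictionaries and resources.
EASY_WORDS = {"la", "casa", "es", "muy", "grande", "y", "bonita"}  # frequent/easy vocabulary
SYNONYMS = {"vivienda": "casa"}                                     # complex word -> simpler synonym
DEFINITIONS = {"vivienda": "lugar donde vive una persona"}
PICTOGRAMS = {"vivienda": "https://example.org/pictograms/vivienda.png"}  # placeholder URL

def simplify(text: str) -> list[dict]:
    """Detect complex words and attach simplification resources for each one."""
    support = []
    for token in text.lower().split():
        word = token.strip(".,;:!?")
        if word and word not in EASY_WORDS:
            support.append({
                "word": word,
                "synonym": SYNONYMS.get(word),
                "definition": DEFINITIONS.get(word),
                "pictogram": PICTOGRAMS.get(word),
            })
    return support

print(simplify("La vivienda es muy grande y bonita."))
```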
{"title":"EASIER system. Language resources for cognitive accessibility.","authors":"Lourdes Moreno, Rodrigo Alarcón, Paloma Martínez","doi":"10.1145/3373625.3418006","DOIUrl":"https://doi.org/10.1145/3373625.3418006","url":null,"abstract":"Difficulties in understanding texts that contain unusual words can create accessibility barriers for people with cognitive, language and learning disabilities. In this work, the EASIER system, a web system that provides various tools which improve cognitive accessibility, is presented. From a text in Spanish, complex words are detected and synonyms, a definition and a pictogram are offered for each complex word detected. Language and accessibility resources were used, such as easy-to-read dictionaries. The web system can be accessed from both desktop computers and mobile devices. Moreover, a browser extension is also offered.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114230685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deconstructing a “puzzle” of visual experiences of blind and low-vision visual artists.
Yulia Zhiglova
DOI: 10.1145/3373625.3417080
Many experiences that come from visual information are not fully accessible to people with visual impairments (VIPs). Details such as subtle facial expressions, physical appearance, and the atmosphere of a space are not easily perceived by VIPs. Our goal is to design a technological solution that communicates visual details haptically. To better understand how visual details are perceived and interpreted by VIPs, we conducted semi-structured interviews with six blind and low-vision visual artists. Our interviews focused on understanding how visual information is perceived and reflected in their artwork. We identified four themes that describe the participants' visual experiences in relation to (1) Perception of Physical Attributes, (2) Interactions with Others, (3) Identifying Challenging Environments, and (4) Strategies and Challenges of Perceiving the Surroundings. Our findings from this preliminary study will guide the design of a haptic solution.
{"title":"Deconstructing a “puzzle” of visual experiences of blind and low-vision visual artists.","authors":"Yulia Zhiglova","doi":"10.1145/3373625.3417080","DOIUrl":"https://doi.org/10.1145/3373625.3417080","url":null,"abstract":"A lot of experiences that come from visual information are often not fully accessible by people with visual impairments (VIPs). Details like subtle facial expressions, physical appearance and atmosphere in the space are not easily perceived by VIPs. Our goal is to design a technological solution to communicate visual details haptically. In order to better understand how visual details are perceived and interpreted by VIPs, we conducted semi-structured interviews with six blind and low vision visual artists. Our interviews focused on understanding how visual information is perceived and reflected in their artwork. We identified four themes that described the participants’ visual experiences in relation to (1) Perception of Physical Attributes, (2) Interactions with Others, (3) Identifying Challenging Environments, (4) Strategies and Challenges of Perceiving the Surroundings. Our findings from this preliminary study will guide the design of a haptic solution.","PeriodicalId":433618,"journal":{"name":"Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115842287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}