Use of Braille in the Workplace by People Who Are Blind
Michele C McDonnall, Rachael Sessler-Trinkowsky, Anne Steverson
Interest in the benefits of braille for people who are blind is high among professionals in the blindness field, but we know little about how braille is used in the workplace. The broad purpose of this study was to learn how employed people who are blind use braille on the job. Specific topics investigated included the work tasks for which refreshable braille technology (RBT) is used, the personal and job characteristics of RBT users compared to non-users, and the factors associated with RBT use among workers with at least moderate braille skills. This study utilized data from 304 participants in a longitudinal research project investigating assistive technology use in the workplace by people who are blind. Two-thirds of our participants used braille on the job, and more than half utilized RBT. Workers who used RBT did not necessarily use it for all computer-related tasks they performed. RBT use was generally not significantly related to job characteristics, except for working for a blindness organization. RBT use was not significantly related to general personal characteristics, but it did differ significantly by disability-related characteristics. Only older age and higher braille skills were significantly associated with RBT use on the job in a multivariate logistic regression model.
{"title":"Use of Braille in the Workplace by People Who Are Blind.","authors":"Michele C McDonnall, Rachael Sessler-Trinkowsky, Anne Steverson","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Interest in the benefits of braille for people who are blind is high among professionals in the blindness field, but we know little about how braille is used in the workplace. The broad purpose of this study was to learn how employed people who are blind use braille on the job. Specific topics investigated included: work tasks refreshable braille technology (RBT) is used for, personal and job characteristics of RBT users compared to non-users, and factors associated with RBT use among workers with at least moderate braille skills. This study utilized data from 304 participants in a longitudinal research project investigating assistive technology use in the workplace by people who are blind. Two-thirds of our participants used braille on the job, and more than half utilized RBT. Workers who used RBT did not necessarily use it for all computer-related tasks they performed. RBT use was generally not significantly related to job characteristics, except for working for a blindness organization. RBT use was not significantly related to general personal characteristics but it was significantly different based on disability-related characteristics. Only older age and higher braille skills were significantly associated with RBT use on the job in a multivariate logistic regression model.</p>","PeriodicalId":74025,"journal":{"name":"Journal on technology and persons with disabilities : ... Annual International Technology and Persons with Disabilities Conference","volume":"12 ","pages":"58-75"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11404553/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142302756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
You Described, We Archived: A Rich Audio Description Dataset
Charity Pitcher-Cooper, Manali Seth, Benjamin Kao, James M Coughlan, Ilmi Yoon
The You Described, We Archived dataset (YuWA) is a collaboration between San Francisco State University and The Smith-Kettlewell Eye Research Institute. It includes audio description (AD) data collected worldwide from 2013 to 2022 through YouDescribe, an accessibility tool for adding audio descriptions to YouTube videos. YouDescribe, a web-based audio description tool with a companion iOS viewing app, has a community of more than 12,000 average annual visitors, including approximately 3,000 volunteer describers, and has produced over 5,500 audio-described YouTube videos. Blind and visually impaired (BVI) viewers request videos, which are saved to a wish list; volunteer describers then select a video, write a script, record audio clips, and edit clip placement to create an audio description. The AD tracks are stored separately, posted for public viewing at https://youdescribe.org/, and played together with the YouTube video. The YuWA audio description data, paired with describer and viewer metadata and the collection timeline, supports a wide range of research applications, including artificial intelligence, machine learning, sociolinguistics, audio description, video understanding, video retrieval, and video-language grounding tasks.
{"title":"You Described, We Archived: A Rich Audio Description Dataset.","authors":"Charity Pitcher-Cooper, Manali Seth, Benjamin Kao, James M Coughlan, Ilmi Yoon","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>The You Described, We Archived dataset (YuWA) is a collaboration between San Francisco State University and The Smith-Kettlewell Eye Research Institute. It includes audio description (AD) data collected worldwide 2013-2022 through YouDescribe, an accessibility tool for adding audio descriptions to YouTube videos. YouDescribe, a web-based audio description tool along with an iOS viewing app, has a community of 12,000+ average annual visitors, with approximately 3,000 volunteer describers, and has created over 5,500 audio described YouTube videos. Blind and visually impaired (BVI) viewers request videos, which then are saved to a wish list and volunteer audio describers select a video, write a script, record audio clips, and edit clip placement to create an audio description. The AD tracks are stored separately, posted for public view at https://youdescribe.org/ and played together with the YouTube video. The YuWA audio description data paired with the describer and viewer metadata, and collection timeline has a large number of research applications including artificial intelligence, machine learning, sociolinguistics, audio description, video understanding, video retrieval and video-language grounding tasks.</p>","PeriodicalId":74025,"journal":{"name":"Journal on technology and persons with disabilities : ... Annual International Technology and Persons with Disabilities Conference","volume":"11 ","pages":"192-208"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10956524/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140186480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VR Training to Facilitate Blind Photography for Navigation
Jonggi Hong, James M Coughlan
Smartphone-based navigation apps allow blind and visually impaired (BVI) people to take images or videos to complete various tasks such as determining the user's location, recognizing objects, and detecting obstacles. The quality of the images and videos significantly affects the performance of these systems, but manipulating a camera to capture clear, properly framed images is a challenging task for BVI users. This research explores the interactions between a camera and BVI users in assistive navigation systems through interviews with BVI participants. We identified the form factors, applications, and challenges of using camera-based navigation systems and designed an interactive training app to improve BVI users' skills in using a camera for navigation. In this paper, we describe the training app's novel virtual environment and report preliminary results of a user study with BVI participants.
{"title":"VR Training to Facilitate Blind Photography for Navigation.","authors":"Jonggi Hong, James M Coughlan","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Smartphone-based navigation apps allow blind and visually impaired (BVI) people to take images or videos to complete various tasks such as determining a user 's location, recognizing objects, and detecting obstacles. The quality of the images and videos significantly affects the performance of these systems, but manipulating a camera to capture clear images with proper framing is a challenging task for BVI users. This research explores the interactions between a camera and BVI users in assistive navigation systems through interviews with BVI participants. We identified the form factors, applications, and challenges in using camera-based navigation systems and designed an interactive training app to improve BVI users' skills in using a camera for navigation. In this paper, we describe a novel virtual environment of the training app and report the preliminary results of a user study with BVI participants.</p>","PeriodicalId":74025,"journal":{"name":"Journal on technology and persons with disabilities : ... Annual International Technology and Persons with Disabilities Conference","volume":"11 ","pages":"245-259"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10962001/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140289848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Getting in Touch With Tactile Map Automated Production: Evaluating Impact and Areas for Improvement
Brandon Biggs, Charity Pitcher-Cooper, James M Coughlan
This study evaluated the impact of the Tactile Maps Automated Production (TMAP) system on its blind and visually impaired (BVI) and orientation and mobility (O&M) users and gathered suggestions for improvement. Semi-structured interviews were conducted with six BVI and seven O&M TMAP users who had printed or ordered two or more TMAPs in the previous year. The number of maps each participant downloaded from the online TMAP generation platform was also reviewed. The most significant finding is that access to TMAPs increased map usage among BVI users from less than one map a year to at least two maps from the order system; those with easy access to an embosser generated an average of 18.33 TMAPs from the online system and reported embossing an average of 42 maps at home or work. O&M specialists appreciated the quick, high-quality, scaled maps they could create and send home with their students, and they frequently used TMAPs with their braille-reading students. To improve TMAPs, users requested that the following features be added: interactivity, greater customizability of TMAPs, viewing of transit stops, lower cost for ordered TMAPs, and nonvisual viewing of the digital TMAP on the online platform.
{"title":"Getting in Touch With Tactile Map Automated Production: Evaluating impact and areas for improvement.","authors":"Brandon Biggs, Charity Pitcher-Cooper, James M Coughlan","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>This study evaluated the impact the Tactile Maps Automated Production (TMAP) system has had on its blind and visually impaired (BVI) and Orientation and Mobility (O&M) users and obtained suggestions for improvement. A semi-structured interview was performed with six BVI and seven O&M TMAP users who had printed or ordered two or more TMAPs in the last year. The number of maps downloaded from the online TMAP generation platform was also reviewed for each participant. The most significant finding is that having access to TMAPs increased map usage for BVIs from less than 1 map a year to getting at least two maps from the order system, with those who had easy access to an embosser generating on average 18.33 TMAPs from the online system and saying they embossed 42 maps on average at home or work. O&Ms appreciated the quick, high-quality, and scaled map they could create and send home with their students, and they frequently used TMAPs with their braille reading students. To improve TMAPs, users requested that the following features be added: interactivity, greater customizability of TMAPs, viewing of transit stops, lower cost of the ordered TMAP, and nonvisual viewing of the digital TMAP on the online platform.</p>","PeriodicalId":74025,"journal":{"name":"Journal on technology and persons with disabilities : ... Annual International Technology and Persons with Disabilities Conference","volume":"10 ","pages":"135-153"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10065749/pdf/nihms-1835895.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9636841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-Time Sign Detection for Accessible Indoor Navigation
Seyed Ali Cheraghi, Giovanni Fusco, James M Coughlan
Indoor navigation is a major challenge for people with visual impairments, who often lack access to the visual cues, such as informational signs, landmarks, and structural features, that people with normal vision rely on for wayfinding. We describe a new approach to recognizing and analyzing informational signs, such as Exit and restroom signs, in a building. This approach will be incorporated into iNavigate, a smartphone app we are developing that provides accessible indoor navigation assistance. The app combines a digital map of the environment with computer vision and inertial sensing to estimate the user's location on the map in real time. Our new approach can recognize and analyze any sign from a small number of training images, and multiple types of signs can be processed simultaneously in each video frame. Moreover, in addition to estimating the distance to each detected sign, we can also estimate the approximate sign orientation (indicating whether the sign is viewed head-on or obliquely), which improves localization performance in challenging conditions. We evaluate the performance of our approach on four sign types distributed among multiple floors of an office building.
{"title":"Real-Time Sign Detection for Accessible Indoor Navigation.","authors":"Seyed Ali Cheraghi, Giovanni Fusco, James M Coughlan","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Indoor navigation is a major challenge for people with visual impairments, who often lack access to visual cues such as informational signs, landmarks and structural features that people with normal vision rely on for wayfinding. We describe a new approach to recognizing and analyzing informational signs, such as Exit and restroom signs, in a building. This approach will be incorporated in iNavigate, a smartphone app we are developing, that provides accessible indoor navigation assistance. The app combines a digital map of the environment with computer vision and inertial sensing to estimate the user's location on the map in real time. Our new approach can recognize and analyze any sign from a small number of training images, and multiple types of signs can be processed simultaneously in each video frame. Moreover, in addition to estimating the distance to each detected sign, we can also estimate the approximate sign orientation (indicating if the sign is viewed head-on or obliquely), which improves the localization performance in challenging conditions. We evaluate the performance of our approach on four sign types distributed among multiple floors of an office building.</p>","PeriodicalId":74025,"journal":{"name":"Journal on technology and persons with disabilities : ... Annual International Technology and Persons with Disabilities Conference","volume":"9 ","pages":"125-139"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8331194/pdf/nihms-1725000.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39277335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Accessible Audio Labeling of 3D Objects
James M Coughlan, Huiying Shen, Brandon Biggs
We describe a new approach that enables a visually impaired person to apply audio labels to 3D objects such as appliances, 3D models, and maps. Our approach, called CamIO, is a smartphone app that issues audio labels when the user points to a hotspot (a location of interest on an object) with a handheld stylus viewed by the smartphone camera. The CamIO app allows a user to create a new hotspot by pointing at the desired location with a second stylus and recording a personalized audio label for it. In contrast with other audio labeling approaches that require the object of interest to be constructed of special materials, 3D printed, or equipped with special sensors, CamIO works with virtually any rigid object and requires only a smartphone, a paper barcode pattern mounted on the object of interest, and two inexpensive styluses. Moreover, our approach allows a visually impaired user to create audio labels independently. We describe a co-design performed with six blind participants exploring how they label objects in their daily lives, and a study with the same participants demonstrating the feasibility of CamIO for accessible audio labeling.
{"title":"Towards Accessible Audio Labeling of 3D Objects.","authors":"James M Coughlan, Huiying Shen, Brandon Biggs","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>We describe a new approach to audio labeling of 3D objects such as appliances, 3D models and maps that enables a visually impaired person to audio label objects. Our approach to audio labeling is called CamIO, a smartphone app that issues audio labels when the user points to a <i>hotspot</i> (a location of interest on an object) with a handheld stylus viewed by the smartphone camera. The CamIO app allows a user to create a new hotspot location by pointing at the location with a second stylus and recording a personalized audio label for the hotspot. In contrast with other audio labeling approaches that require the object of interest to be constructed of special materials, 3D printed, or equipped with special sensors, CamIO works with virtually any rigid object and requires only a smartphone, a paper barcode pattern mounted to the object of interest, and two inexpensive styluses. Moreover, our approach allows a visually impaired user to create audio labels independently. We describe a co-design performed with six blind participants exploring how they label objects in their daily lives and a study with the participants demonstrating the feasibility of CamIO for providing accessible audio labeling.</p>","PeriodicalId":74025,"journal":{"name":"Journal on technology and persons with disabilities : ... Annual International Technology and Persons with Disabilities Conference","volume":"8 ","pages":"210-222"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7425180/pdf/nihms-1611173.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38279362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
S-K Smartphone Barcode Reader for the Blind
Ender Tekin, David Vásquez, James M Coughlan
We describe a new smartphone app called BLaDE (Barcode Localization and Decoding Engine), designed to enable a blind or visually impaired user to find and read product barcodes. Developed at The Smith-Kettlewell Eye Research Institute, the BLaDE Android app has been released as open-source software that can be used for free or modified for commercial or non-commercial use. Unlike popular commercial smartphone apps, BLaDE provides real-time audio feedback to help visually impaired users locate a barcode, which is a prerequisite to being able to read it. We describe experiments performed with five blind or visually impaired volunteer participants demonstrating that BLaDE is usable and that the audio feedback is key to its usability.
{"title":"S-K Smartphone Barcode Reader for the Blind.","authors":"Ender Tekin, David Vásquez, James M Coughlan","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>We describe a new smartphone app called BLaDE (Barcode Localization and Decoding Engine), designed to enable a blind or visually impaired user find and read product barcodes. Developed at The Smith-Kettlewell Eye Research Institute, the BLaDE Android app has been released as open source software, which can be used for free or modified for commercial or non-commercial use. Unlike popular commercial smartphone apps, BLaDE provides real-time audio feedback to help visually impaired users locate a barcode, which is a prerequisite to being able to read it. We describe experiments performed with five blind/visually impaired volunteer participants demonstrating that BLaDE is usable and that the audio feedback is key to its usability.</p>","PeriodicalId":74025,"journal":{"name":"Journal on technology and persons with disabilities : ... Annual International Technology and Persons with Disabilities Conference","volume":"28 ","pages":"230-239"},"PeriodicalIF":0.0,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4288446/pdf/nihms626930.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32986799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}