We present a study with 20 participants with low vision who operated two types of screen magnification (lens and full) on a laptop computer to read two types of documents (text and web page). Our purposes were to comparatively assess the two magnification modalities and to gain insight into how people with low vision use the mouse to control the center of magnification. These observations may inform the design of systems that automatically control the center of magnification. Our results show no significant differences in reading performance or subjective preference between the two magnification modes. However, when using the lens mode, our participants adopted more consistent and uniform mouse motion patterns, whereas the full mode produced longer and more frequent pauses and shorter overall path lengths. Analysis of the distribution of gaze points (measured by a gaze tracker) in the full mode shows that, when reading a text document, most participants preferred to move the area of interest to a specific region of the screen.
{"title":"Screen Magnification for Readers with Low Vision: A Study on Usability and Performance.","authors":"Meini Tang, Roberto Manduchi, Susana Chung, Raquel Prado","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>We present a study with 20 participants with low vision who operated two types of screen magnification (lens and full) on a laptop computer to read two types of document (text and web page). Our purposes were to comparatively assess the two magnification modalities, and to obtain some insight into how people with low vision use the mouse to control the center of magnification. These observations may inform the design of systems for the automatic control of the center of magnification. Our results show that there were no significant differences in reading performances or in subjective preferences between the two magnification modes. However, when using the lens mode, our participants adopted more consistent and uniform mouse motion patterns, while longer and more frequent pauses and shorter overall path lengths were measured using the full mode. Analysis of the distribution of gaze points (as measured by a gaze tracker) using the full mode shows that, when reading a text document, most participants preferred to move the area of interest to a specific region of the screen.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10923554/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140095279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Teachable object recognizers address a very practical need for blind people: instance-level object recognition. However, they assume that users can visually inspect the photos they provide for training, a critical and inaccessible step for those who are blind. In this work, we engineer data descriptors that address this challenge. They indicate in real time whether the object in the photo is cropped or too small, whether a hand is included, whether the photo is blurred, and how much the photos vary from each other. Our descriptors are built into an open-source testbed iOS app called MYCam. In a remote user study in the homes of blind participants (N = 12), we show how the descriptors, even when error-prone, support experimentation and have a positive impact on the quality of the training set, which can translate to better model performance, though this gain is not uniform. Participants found the app simple to use, indicating that they could effectively train it and that the descriptors were useful. However, many found the training tedious, opening discussions around the need to balance information, time, and cognitive load.
{"title":"Blind Users Accessing Their Training Images in Teachable Object Recognizers.","authors":"Jonggi Hong, Jaina Gandhi, Ernest Essuah Mensah, Farnaz Zamiri Zeraati, Ebrima Haddy Jarjue, Kyungjun Lee, Hernisa Kacorri","doi":"10.1145/3517428.3544824","DOIUrl":"10.1145/3517428.3544824","url":null,"abstract":"<p><p>Teachable object recognizers provide a solution for a very practical need for blind people - instance level object recognition. They assume one can visually inspect the photos they provide for training, a critical and inaccessible step for those who are blind. In this work, we engineer data descriptors that address this challenge. They indicate in real time whether the object in the photo is cropped or too small, a hand is included, the photos is blurred, and how much photos vary from each other. Our descriptors are built into open source testbed iOS app, called MYCam. In a remote user study in (<i>N</i> = 12) blind participants' homes, we show how descriptors, even when error-prone, support experimentation and have a positive impact in the quality of training set that can translate to model performance though this gain is not uniform. Participants found the app simple to use indicating that they could effectively train it and that the descriptors were useful. However, many found the training being tedious, opening discussions around the need for balance between information, time, and cognitive load.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10008526/pdf/nihms-1869981.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9111608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the rising usage of mobile phones by people with mild dementia, and the documented barriers to technology use that exist for people with dementia, there is an open opportunity to study the specifics of mobile phone use by people with dementia. In this work we provide a first step towards filling this gap through an interview study with fourteen people with mild to moderate dementia. Our analysis yields insights into mobile phone use by people with mild to moderate dementia, challenges they experience with mobile phone use, and their ideas to address these challenges. Based on these findings, we discuss design opportunities to help achieve more accessible and supportive technology use for people with dementia. Our work opens up new opportunities for the design of systems focused on augmenting and enhancing the abilities of people with dementia.
{"title":"Mobile Phone Use by People with Mild to Moderate Dementia: Uncovering Challenges and Identifying Opportunities: Mobile Phone Use by People with Mild to Moderate Dementia.","authors":"Emma Dixon, Rain Michaels, Xiang Xiao, Yu Zhong, Patrick Clary, Ajit Narayanan, Robin Brewer, Amanda Lazar","doi":"10.1145/3517428.3544809","DOIUrl":"https://doi.org/10.1145/3517428.3544809","url":null,"abstract":"<p><p>With the rising usage of mobile phones by people with mild dementia, and the documented barriers to technology use that exist for people with dementia, there is an open opportunity to study the specifics of mobile phone use by people with dementia. In this work we provide a first step towards filling this gap through an interview study with fourteen people with mild to moderate dementia. Our analysis yields insights into mobile phone use by people with mild to moderate dementia, challenges they experience with mobile phone use, and their ideas to address these challenges. Based on these findings, we discuss design opportunities to help achieve more accessible and supportive technology use for people with dementia. Our work opens up new opportunities for the design of systems focused on augmenting and enhancing the abilities of people with dementia.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10202486/pdf/nihms-1865459.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9582599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As data-driven systems are increasingly deployed at scale, ethical concerns have arisen around unfair and discriminatory outcomes for historically marginalized groups that are underrepresented in training data. In response, work on AI fairness and inclusion has called for datasets that are representative of various demographic groups. In this paper, we contribute an analysis of the representativeness of age, gender, and race & ethnicity in accessibility datasets: datasets sourced from people with disabilities and older adults, which can potentially play an important role in mitigating bias for inclusive AI-infused applications. We examine the current state of representation within datasets sourced from people with disabilities by reviewing publicly available information on 190 datasets, which we call accessibility datasets. We find that accessibility datasets represent diverse ages but have gender and race representation gaps. Additionally, we investigate how the sensitive and complex nature of demographic variables (e.g., gender, race & ethnicity) makes classification difficult and inconsistent, with the source of labeling often unknown. By reflecting on the current challenges and opportunities for representation of disabled data contributors, we hope our effort expands the space of possibility for greater inclusion of marginalized communities in AI-infused systems.
{"title":"Data Representativeness in Accessibility Datasets: A Meta-Analysis.","authors":"Rie Kamikubo, Lining Wang, Crystal Marte, Amnah Mahmood, Hernisa Kacorri","doi":"10.1145/3517428.3544826","DOIUrl":"https://doi.org/10.1145/3517428.3544826","url":null,"abstract":"<p><p>As data-driven systems are increasingly deployed at scale, ethical concerns have arisen around unfair and discriminatory outcomes for historically marginalized groups that are underrepresented in training data. In response, work around AI fairness and inclusion has called for datasets that are representative of various demographic groups. In this paper, we contribute an analysis of the representativeness of age, gender, and race & ethnicity in accessibility datasets-datasets sourced from people with disabilities and older adults-that can potentially play an important role in mitigating bias for inclusive AI-infused applications. We examine the current state of representation within datasets sourced by people with disabilities by reviewing publicly-available information of 190 datasets, we call these accessibility datasets. We find that accessibility datasets represent diverse ages, but have gender and race representation gaps. Additionally, we investigate how the sensitive and complex nature of demographic variables makes classification difficult and inconsistent (<i>e.g.</i>, gender, race & ethnicity), with the source of labeling often unknown. By reflecting on the current challenges and opportunities for representation of disabled data contributors, we hope our effort expands the space of possibility for greater inclusion of marginalized communities in AI-infused systems.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10024595/pdf/nihms-1869788.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9153813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computer access is increasingly critical for all aspects of life, from education to employment to daily living, health, and almost all types of participation. The pandemic has highlighted our dependence on technology, but this dependence existed before the pandemic and continues after it. Yet many people face barriers due to disability, literacy, or digital literacy. Although the problems faced by individuals with disabilities have received focus for some time, the problems faced by people who simply have difficulty using technology have not; they constitute a second large, yet less understood, problem. Solutions exist but are often not installed, buried, hard to find, and difficult to understand and use. To address these problems, an open-source extension to the Windows and macOS operating systems has been under exploration and development by an international consortium of organizations, companies, and individuals. It combines auto-personalization, layering, and enhanced discovery with the ability to Install on Demand (IoD) any assistive technology a user needs. The software, called Morphic, is now installed on all of the computers across campus at several major universities and libraries in the US and Canada. It makes computers simpler to use and allows whichever features or assistive technologies a person needs to appear on any computer they encounter and want to use (at school, work, the library, a community center, etc.), provided that computer has Morphic installed. This demonstration will cover both the basic and advanced features, as well as how to get free copies of the open-source software and configure it for school, work, or personal use. It will also highlight lessons learned from the placements.
{"title":"An Open-source Tool for Simplifying Computer and Assistive Technology Use: Tool for simplification and auto-personalization of computers and assistive technologies.","authors":"Gregg C Vanderheiden, J Bern Jordan","doi":"10.1145/3441852.3476554","DOIUrl":"https://doi.org/10.1145/3441852.3476554","url":null,"abstract":"Computer access is increasingly critical for all aspects of life from education to employment to daily living, health and almost all types of participation. The pandemic has highlighted our dependence on technology, but the dependence existed before and is continuing after. Yet many face barriers due to disability, literacy, or digital literacy. Although the problems faced by individuals with disabilities have received focus for some time, the problems faced by people who just have difficulty in using technologies has not, but is a second large, yet less understood problem. Solutions exist but are often not installed, buried, hard to find, and difficult to understand and use. To address these problems, an open-source extension to the Windows and macOS operating systems has been under exploration and development by an international consortium of organizations, companies, and individuals. It combines auto-personalization, layering, and enhanced discovery, with the ability to Install on Demand (IoD) any assistive technologies a user needs. The software, called Morphic, is now installed on all of the computers across campus at several major universities and libraries in the US and Canada. It makes computers simpler to use, and allows whichever features or assistive technologies a person needs to appear on any computer they encounter (that has Morphic on it) and want to use at school, work, library, community center, etc. This demonstration will cover both the basic and advanced features as well as how to get free copies of the open-source software and configure it for school, work or personal use. It will also highlight lessons learned from the placements.","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8620129/pdf/nihms-1752258.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39942022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The spatial behavior of passersby can be critical for blind individuals who want to initiate interactions, preserve personal space, or practice social distancing during a pandemic. Among other use cases, wearable cameras employing computer vision can be used to extract the proxemic signals of others and thus increase blind people's access to the spatial behavior of passersby. Analyzing data collected in a study with blind (N=10) and sighted (N=40) participants, we explore: (i) visual information on approaching passersby captured by a head-worn camera; (ii) pedestrian detection algorithms for extracting proxemic signals such as passerby presence, relative position, distance, and head pose; and (iii) opportunities and limitations of using wearable cameras to help blind people access proxemics related to nearby people. Our observations and findings provide insights into dyadic behaviors for assistive pedestrian detection and lead to implications for the design of future head-worn cameras and interactions.
{"title":"Accessing Passersby Proxemic Signals through a Head-Worn Camera: Opportunities and Limitations for the Blind.","authors":"Kyungjun Lee, Daisuke Sato, Saki Asakawa, Chieko Asakawa, Hernisa Kacorri","doi":"10.1145/3441852.3471232","DOIUrl":"https://doi.org/10.1145/3441852.3471232","url":null,"abstract":"<p><p>The spatial behavior of passersby can be critical to blind individuals to initiate interactions, preserve personal space, or practice social distancing during a pandemic. Among other use cases, wearable cameras employing computer vision can be used to extract proxemic signals of others and thus increase access to the spatial behavior of passersby for blind people. Analyzing data collected in a study with blind (N=10) and sighted (N=40) participants, we explore: (i) visual information on approaching passersby captured by a head-worn camera; (ii) pedestrian detection algorithms for extracting proxemic signals such as passerby presence, relative position, distance, and head pose; and (iii) opportunities and limitations of using wearable cameras for helping blind people access proxemics related to nearby people. Our observations and findings provide insights into dyadic behaviors for assistive pedestrian detection and lead to implications for the design of future head-worn cameras and interactions.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8855357/pdf/nihms-1752252.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39941353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Datasets sourced from people with disabilities and older adults play an important role in innovation, benchmarking, and mitigating bias for both assistive and inclusive AI-infused applications. However, they are scarce. We conduct a systematic review of 137 accessibility datasets manually located across different disciplines over the last 35 years. Our analysis highlights how researchers navigate tensions between benefits and risks in data collection and sharing. We uncover patterns in data collection purpose, terminology, sample size, data types, and data sharing practices across communities of focus. We conclude by critically reflecting on challenges and opportunities related to locating and sharing accessibility datasets, calling for technical, legal, and institutional privacy frameworks that are more attuned to concerns from these communities.
{"title":"Sharing Practices for Datasets Related to Accessibility and Aging.","authors":"Rie Kamikubo, Utkarsh Dwivedi, Hernisa Kacorri","doi":"10.1145/3441852.3471208","DOIUrl":"10.1145/3441852.3471208","url":null,"abstract":"<p><p>Datasets sourced from people with disabilities and older adults play an important role in innovation, benchmarking, and mitigating bias for both assistive and inclusive AI-infused applications. However, they are scarce. We conduct a systematic review of 137 accessibility datasets manually located across different disciplines over the last 35 years. Our analysis highlights how researchers navigate tensions between benefits and risks in data collection and sharing. We uncover patterns in data collection purpose, terminology, sample size, data types, and data sharing practices across communities of focus. We conclude by critically reflecting on challenges and opportunities related to locating and sharing accessibility datasets calling for technical, legal, and institutional privacy frameworks that are more attuned to concerns from these communities.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8855358/pdf/nihms-1752251.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39941351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audio descriptions (ADs) can increase access to videos for blind people. Researchers have explored different mechanisms for generating ADs, with some of the most recent studies involving paid novices; to improve the quality of their ADs, novices receive feedback from reviewers. However, reviewer feedback is not instantaneous. To explore the potential for real-time feedback through automation, in this paper we analyze 1,120 comments that 40 sighted novices received from a sighted or a blind reviewer. We find that feedback patterns tend to fall under four themes: (i) Quality: comments on different AD quality variables; (ii) Speech Act: the utterance or speech action that the reviewers used; (iii) Required Action: the action the reviewers recommended the authors take to improve the AD; and (iv) Guidance: the additional help that the reviewers gave the authors. We discuss which of these patterns could be automated within the review process as design implications for future AD collaborative authoring systems.
{"title":"Uncovering Patterns in Reviewers' Feedback to Scene Description Authors.","authors":"Rosiana Natalie, Jolene Loh Kar Inn, Tan Huei Suen, Joshua Tseng Shi Hao, Hernisa Kacorri, Kotaro Hara","doi":"10.1145/3441852.3476550","DOIUrl":"10.1145/3441852.3476550","url":null,"abstract":"<p><p>Audio descriptions (ADs) can increase access to videos for blind people. Researchers have explored different mechanisms for generating ADs, with some of the most recent studies involving paid novices; to improve the quality of their ADs, novices receive feedback from reviewers. However, reviewer feedback is not instantaneous. To explore the potential for real-time feedback through automation, in this paper, we analyze 1, 120 comments that 40 sighted novices received from a sighted or a blind reviewer. We find that feedback patterns tend to fall under four themes: (i) <b>Quality</b>; commenting on different AD quality variables, (ii) <b>Speech Act</b>; the utterance or speech action that the reviewers used, (iii) <b>Required Action</b>; the recommended action that the authors should do to improve the AD, and (iv) <b>Guidance</b>; the additional help that the reviewers gave to help the authors. We discuss which of these patterns could be automated within the review process as design implications for future AD collaborative authoring systems.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8855355/pdf/nihms-1752255.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39941354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The majority of online video content remains inaccessible to people with visual impairments due to the lack of audio descriptions to depict the video scenes. Content creators have traditionally relied on professionals to author audio descriptions, but their service is costly and not readily available. We investigate the feasibility of creating more cost-effective audio descriptions that are also of high quality by involving novices. Specifically, we designed, developed, and evaluated ViScene, a web-based collaborative audio description authoring tool that enables a sighted novice author and a reviewer (either sighted or blind) to interact and contribute to scene descriptions (SDs), i.e., text that can be transformed into audio through text-to-speech. Through a mixed-design study with N = 60 participants, we assessed the quality of SDs created by sighted novices with feedback from both sighted and blind reviewers. Our results showed that with ViScene novices could produce content that is Descriptive, Objective, Referable, and Clear at a cost of US$2.81pvm to US$5.48pvm, which is 54% to 96% lower than the professional service. However, the descriptions fell short on other quality dimensions (e.g., Learning, a measure of how well an SD conveys the video's intended message). While professional audio describers remain the gold standard, for content creators who cannot afford them, ViScene offers a cost-effective alternative, ultimately leading to a more accessible medium.
{"title":"The Efficacy of Collaborative Authoring of Video Scene Descriptions.","authors":"Rosiana Natalie, Joshua Tseng, Jolene Loh, Ian Luke Yi-Ren Chan, Huei Suen Tan, Ebrima H Jarjue, Hernisa Kacorri, Kotaro Hara","doi":"10.1145/3441852.3471201","DOIUrl":"https://doi.org/10.1145/3441852.3471201","url":null,"abstract":"<p><p>The majority of online video contents remain inaccessible to people with visual impairments due to the lack of audio descriptions to depict the video scenes. Content creators have traditionally relied on professionals to author audio descriptions, but their service is costly and not readily-available. We investigate the feasibility of creating more cost-effective audio descriptions that are also of high quality by involving novices. Specifically, we designed, developed, and evaluated ViScene, a web-based collaborative audio description authoring tool that enables a sighted novice author and a reviewer either sighted or blind to interact and contribute to scene descriptions (SDs)-text that can be transformed into audio through text-to-speech. Through a mixed-design study with <i>N</i> = 60 participants, we assessed the quality of SDs created by sighted novices with feedback from both sighted and blind reviewers. Our results showed that with ViScene novices could produce content that is Descriptive, Objective, Referable, and Clear at a cost of <i>i.e.,</i> US$2.81pvm to US$5.48pvm, which is 54% to 96% lower than the professional service. However, the descriptions lacked in other quality dimensions (<i>e.g.,</i> learning, a measure of how well an SD conveys the video's intended message). While professional audio describers remain the gold standard, for content creators who cannot afford it, ViScene offers a cost-effective alternative, ultimately leading to a more accessible medium.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8855356/pdf/nihms-1752253.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39941352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
People with visual impairments typically rely on screen-magnifier assistive technology to interact with webpages. As screen-magnifier users can only view a portion of the webpage content in an enlarged form at any given time, they have to endure an inconvenient and arduous process of repeatedly moving the magnifier focus back and forth over different portions of the webpage in order to make comparisons between data records, e.g., comparing the available flights on a travel website based on their prices, durations, etc. To address this issue, we designed and developed TableView, a browser extension that leverages a state-of-the-art information extraction method to automatically identify and extract data records and their attributes in a webpage, and subsequently presents them to the user in a compactly arranged tabular format that needs significantly less screen space than these items currently occupy on the page. This way, TableView is able to pack more items within the magnifier focus, thereby reducing the overall content area for panning and hence making it easy for screen-magnifier users to compare different items before making their selections. A user study with 16 low vision participants showed that with TableView, the time spent panning the data records in webpages was significantly reduced: by 72.9% on average compared to using just a screen magnifier, and by 66.5% compared to using a screen magnifier with a space compaction method.
{"title":"TableView: Enabling Eficient Access to Web Data Records for Screen-Magnifier Users.","authors":"Hae-Na Lee, Sami Uddin, Vikas Ashok","doi":"10.1145/3373625.3417030","DOIUrl":"https://doi.org/10.1145/3373625.3417030","url":null,"abstract":"<p><p>People with visual impairments typically rely on screen-magnifier assistive technology to interact with webpages. As screen-magnifier users can only view a portion of the webpage content in an enlarged form at any given time, they have to endure an inconvenient and arduous process of repeatedly moving the magnifier focus back-and-forth over different portions of the webpage in order to make comparisons between data records, e.g., comparing the available fights in a travel website based on their prices, durations, etc. To address this issue, we designed and developed TableView, a browser extension that leverages a state-of-the art information extraction method to automatically identify and extract data records and their attributes in a webpage, and subsequently presents them to a user in a compactly arranged tabular format that needs significantly less screen space compared to that currently occupied by these items in the page. This way, TableView is able to pack more items within the magnifier focus, thereby reducing the overall content area for panning, and hence making it easy for screen-magnifier users to compare different items before making their selections. A user study with 16 low vision participants showed that with TableView, the time spent on panning the data records in webpages was significantly reduced by 72.9% (avg.) compared to that with just a screen magnifier, and 66.5% compared to that with a screen magnifier using a space compaction method.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. Annual ACM Conference on Assistive Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3373625.3417030","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25455684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}