
ASSETS. Annual ACM Conference on Assistive Technologies: Latest Publications

Screen Magnification for Readers with Low Vision: A Study on Usability and Performance.
Pub Date : 2023-10-01 Epub Date: 2023-10-22
Meini Tang, Roberto Manduchi, Susana Chung, Raquel Prado

We present a study with 20 participants with low vision who operated two types of screen magnification (lens and full) on a laptop computer to read two types of documents (text and web page). Our purposes were to comparatively assess the two magnification modalities and to obtain some insight into how people with low vision use the mouse to control the center of magnification. These observations may inform the design of systems for the automatic control of the center of magnification. Our results show that there were no significant differences in reading performance or in subjective preferences between the two magnification modes. However, when using the lens mode, our participants adopted more consistent and uniform mouse motion patterns, while longer and more frequent pauses and shorter overall path lengths were measured using the full mode. Analysis of the distribution of gaze points (as measured by a gaze tracker) using the full mode shows that, when reading a text document, most participants preferred to move the area of interest to a specific region of the screen.
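The mouse-trace measures the abstract reports (overall path length, pause count, and pause duration) can be computed from a timestamped cursor log. A minimal sketch, assuming a simple speed threshold to define a pause; the function name, sampling format, and threshold are illustrative, not the study's actual instrumentation:

```python
import math

def trace_metrics(samples, pause_speed=5.0):
    """Compute overall path length and pauses from a mouse trace.

    samples: list of (t_seconds, x_px, y_px) tuples in time order.
    A 'pause' is a maximal run of inter-sample intervals where cursor
    speed (px/s) stays below pause_speed (an illustrative threshold).
    Returns (path_length_px, pause_count, total_pause_seconds).
    """
    path_length = 0.0
    pauses = 0
    pause_time = 0.0
    in_pause = False
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        d = math.hypot(x1 - x0, y1 - y0)  # segment length
        path_length += d
        dt = t1 - t0
        speed = d / dt if dt > 0 else 0.0
        if speed < pause_speed:
            pause_time += dt
            if not in_pause:          # start of a new pause run
                pauses += 1
                in_pause = True
        else:
            in_pause = False
    return path_length, pauses, pause_time
```

Comparing these metrics between magnification modes mirrors the kind of analysis described: shorter path lengths with more and longer pauses would match the full-mode pattern reported above.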

Citations: 0
Blind Users Accessing Their Training Images in Teachable Object Recognizers.
Pub Date : 2022-10-01 Epub Date: 2022-10-22 DOI: 10.1145/3517428.3544824
Jonggi Hong, Jaina Gandhi, Ernest Essuah Mensah, Farnaz Zamiri Zeraati, Ebrima Haddy Jarjue, Kyungjun Lee, Hernisa Kacorri

Teachable object recognizers provide a solution for a very practical need for blind people: instance-level object recognition. They assume one can visually inspect the photos they provide for training, a critical and inaccessible step for those who are blind. In this work, we engineer data descriptors that address this challenge. They indicate in real time whether the object in the photo is cropped or too small, whether a hand is included, whether the photo is blurred, and how much the photos vary from each other. Our descriptors are built into an open-source testbed iOS app called MYCam. In a remote user study in (N = 12) blind participants' homes, we show how descriptors, even when error-prone, support experimentation and have a positive impact on the quality of the training set that can translate to model performance, though this gain is not uniform. Participants found the app simple to use, indicating that they could effectively train it and that the descriptors were useful. However, many found the training tedious, opening discussions around the need for balance between information, time, and cognitive load.
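Per-photo checks like those the descriptors report (object cropped, object too small, photo blurred) can be sketched with standard image heuristics. A minimal illustration, not MYCam's actual implementation: the function name, thresholds, and the variance-of-Laplacian sharpness proxy are all assumptions for demonstration:

```python
import numpy as np

def photo_descriptors(gray, box, blur_thresh=100.0, min_frac=0.1):
    """Heuristic per-photo descriptors in the spirit of the checks above.

    gray: 2-D float array (grayscale image); box: (x, y, w, h) object box.
    Thresholds are illustrative, not taken from the paper.
    """
    h_img, w_img = gray.shape
    x, y, w, h = box
    # An object box touching the frame edge suggests the object is cropped.
    cropped = x <= 0 or y <= 0 or x + w >= w_img or y + h >= h_img
    # An object occupying a small fraction of the frame is likely too small.
    too_small = (w * h) / (w_img * h_img) < min_frac
    # Variance of a Laplacian response is a common sharpness proxy:
    # low variance means few edges, i.e. a likely blurred photo.
    lap = (-4 * gray
           + np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1))
    blurred = lap[1:-1, 1:-1].var() < blur_thresh
    return {"cropped": cropped, "too_small": too_small, "blurred": blurred}
```

Running such checks per captured photo is what makes real-time feedback possible: the user can retake a photo immediately instead of discovering problems only after training.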

Citations: 0
Mobile Phone Use by People with Mild to Moderate Dementia: Uncovering Challenges and Identifying Opportunities.
Pub Date : 2022-10-01 DOI: 10.1145/3517428.3544809
Emma Dixon, Rain Michaels, Xiang Xiao, Yu Zhong, Patrick Clary, Ajit Narayanan, Robin Brewer, Amanda Lazar

With the rising usage of mobile phones by people with mild dementia, and the documented barriers to technology use that exist for people with dementia, there is an open opportunity to study the specifics of mobile phone use by people with dementia. In this work we provide a first step towards filling this gap through an interview study with fourteen people with mild to moderate dementia. Our analysis yields insights into mobile phone use by people with mild to moderate dementia, challenges they experience with mobile phone use, and their ideas to address these challenges. Based on these findings, we discuss design opportunities to help achieve more accessible and supportive technology use for people with dementia. Our work opens up new opportunities for the design of systems focused on augmenting and enhancing the abilities of people with dementia.

Citations: 3
Data Representativeness in Accessibility Datasets: A Meta-Analysis.
Pub Date : 2022-10-01 DOI: 10.1145/3517428.3544826
Rie Kamikubo, Lining Wang, Crystal Marte, Amnah Mahmood, Hernisa Kacorri

As data-driven systems are increasingly deployed at scale, ethical concerns have arisen around unfair and discriminatory outcomes for historically marginalized groups that are underrepresented in training data. In response, work around AI fairness and inclusion has called for datasets that are representative of various demographic groups. In this paper, we contribute an analysis of the representativeness of age, gender, and race & ethnicity in accessibility datasets: datasets sourced from people with disabilities and older adults, which can potentially play an important role in mitigating bias for inclusive AI-infused applications. We examine the current state of representation within datasets sourced from people with disabilities by reviewing publicly available information on 190 datasets, which we call accessibility datasets. We find that accessibility datasets represent diverse ages but have gender and race representation gaps. Additionally, we investigate how the sensitive and complex nature of demographic variables makes classification difficult and inconsistent (e.g., gender, race & ethnicity), with the source of labeling often unknown. By reflecting on the current challenges and opportunities for representation of disabled data contributors, we hope our effort expands the space of possibility for greater inclusion of marginalized communities in AI-infused systems.

Citations: 8
An Open-source Tool for Simplifying Computer and Assistive Technology Use: Tool for simplification and auto-personalization of computers and assistive technologies.
Pub Date : 2021-10-01 DOI: 10.1145/3441852.3476554
Gregg C Vanderheiden, J Bern Jordan
Computer access is increasingly critical for all aspects of life, from education to employment to daily living, health, and almost all types of participation. The pandemic has highlighted our dependence on technology, but the dependence existed before it and continues after. Yet many face barriers due to disability, literacy, or digital literacy. Although the problems faced by individuals with disabilities have received focus for some time, the problems faced by people who simply have difficulty using technologies have not; they are a second large, yet less understood, problem. Solutions exist but are often not installed, are buried and hard to find, and are difficult to understand and use. To address these problems, an open-source extension to the Windows and macOS operating systems has been under exploration and development by an international consortium of organizations, companies, and individuals. It combines auto-personalization, layering, and enhanced discovery with the ability to Install on Demand (IoD) any assistive technologies a user needs. The software, called Morphic, is now installed on all of the computers across campus at several major universities and libraries in the US and Canada. It makes computers simpler to use and allows whichever features or assistive technologies a person needs to appear on any computer they encounter (that has Morphic on it) and want to use at school, work, the library, a community center, etc. This demonstration will cover both the basic and advanced features, as well as how to get free copies of the open-source software and configure it for school, work, or personal use. It will also highlight lessons learned from the placements.
Citations: 0
Accessing Passersby Proxemic Signals through a Head-Worn Camera: Opportunities and Limitations for the Blind.
Pub Date : 2021-01-01 DOI: 10.1145/3441852.3471232
Kyungjun Lee, Daisuke Sato, Saki Asakawa, Chieko Asakawa, Hernisa Kacorri

The spatial behavior of passersby can be critical for blind individuals to initiate interactions, preserve personal space, or practice social distancing during a pandemic. Among other use cases, wearable cameras employing computer vision can be used to extract proxemic signals of others and thus increase access to the spatial behavior of passersby for blind people. Analyzing data collected in a study with blind (N=10) and sighted (N=40) participants, we explore: (i) visual information on approaching passersby captured by a head-worn camera; (ii) pedestrian detection algorithms for extracting proxemic signals such as passerby presence, relative position, distance, and head pose; and (iii) opportunities and limitations of using wearable cameras for helping blind people access proxemics related to nearby people. Our observations and findings provide insights into dyadic behaviors for assistive pedestrian detection and lead to implications for the design of future head-worn cameras and interactions.
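Two of the proxemic signals listed (distance and relative position) can be derived from a pedestrian detector's bounding box. A coarse sketch under the pinhole camera model, not the paper's actual pipeline; the focal length, assumed person height, and thirds-based position bins are all illustrative assumptions:

```python
def estimate_distance(bbox_height_px, focal_px, person_height_m=1.7):
    """Rough passerby distance via the pinhole model:
    distance = focal_length * real_height / pixel_height.

    focal_px: camera focal length in pixels; person_height_m is an
    assumed average standing height, so the result is only approximate.
    """
    if bbox_height_px <= 0:
        raise ValueError("bounding box height must be positive")
    return focal_px * person_height_m / bbox_height_px

def relative_position(bbox_center_x, image_width):
    """Bin the passerby's horizontal position into left/center/right
    using simple thirds of the frame."""
    frac = bbox_center_x / image_width
    if frac < 1 / 3:
        return "left"
    if frac > 2 / 3:
        return "right"
    return "center"
```

Estimates like these degrade when the person is partially occluded or seated, which is one reason the abstract frames wearable cameras as having both opportunities and limitations.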

Citations: 3
Sharing Practices for Datasets Related to Accessibility and Aging.
Pub Date : 2021-01-01 DOI: 10.1145/3441852.3471208
Rie Kamikubo, Utkarsh Dwivedi, Hernisa Kacorri

Datasets sourced from people with disabilities and older adults play an important role in innovation, benchmarking, and mitigating bias for both assistive and inclusive AI-infused applications. However, they are scarce. We conduct a systematic review of 137 accessibility datasets manually located across different disciplines over the last 35 years. Our analysis highlights how researchers navigate tensions between benefits and risks in data collection and sharing. We uncover patterns in data collection purpose, terminology, sample size, data types, and data sharing practices across communities of focus. We conclude by critically reflecting on challenges and opportunities related to locating and sharing accessibility datasets, calling for technical, legal, and institutional privacy frameworks that are more attuned to concerns from these communities.

Citations: 0
Uncovering Patterns in Reviewers' Feedback to Scene Description Authors.
Pub Date : 2021-01-01 DOI: 10.1145/3441852.3476550
Rosiana Natalie, Jolene Loh Kar Inn, Tan Huei Suen, Joshua Tseng Shi Hao, Hernisa Kacorri, Kotaro Hara

Audio descriptions (ADs) can increase access to videos for blind people. Researchers have explored different mechanisms for generating ADs, with some of the most recent studies involving paid novices; to improve the quality of their ADs, novices receive feedback from reviewers. However, reviewer feedback is not instantaneous. To explore the potential for real-time feedback through automation, in this paper, we analyze 1,120 comments that 40 sighted novices received from a sighted or a blind reviewer. We find that feedback patterns tend to fall under four themes: (i) Quality; commenting on different AD quality variables, (ii) Speech Act; the utterance or speech action that the reviewers used, (iii) Required Action; the recommended action that the authors should do to improve the AD, and (iv) Guidance; the additional help that the reviewers gave to help the authors. We discuss which of these patterns could be automated within the review process as design implications for future AD collaborative authoring systems.

Citations: 0
The Efficacy of Collaborative Authoring of Video Scene Descriptions.
Pub Date : 2021-01-01 DOI: 10.1145/3441852.3471201
Rosiana Natalie, Joshua Tseng, Jolene Loh, Ian Luke Yi-Ren Chan, Huei Suen Tan, Ebrima H Jarjue, Hernisa Kacorri, Kotaro Hara

The majority of online video content remains inaccessible to people with visual impairments due to the lack of audio descriptions to depict the video scenes. Content creators have traditionally relied on professionals to author audio descriptions, but their service is costly and not readily available. We investigate the feasibility of involving novices to create audio descriptions that are more cost-effective yet still of high quality. Specifically, we designed, developed, and evaluated ViScene, a web-based collaborative audio description authoring tool that enables a sighted novice author and a reviewer, either sighted or blind, to interact and contribute to scene descriptions (SDs): text that can be transformed into audio through text-to-speech. Through a mixed-design study with N = 60 participants, we assessed the quality of SDs created by sighted novices with feedback from both sighted and blind reviewers. Our results showed that with ViScene novices could produce content that is Descriptive, Objective, Referable, and Clear at a cost of US$2.81pvm to US$5.48pvm, which is 54% to 96% lower than the professional service. However, the descriptions fell short on other quality dimensions (e.g., learning, a measure of how well an SD conveys the video's intended message). While professional audio describers remain the gold standard, for content creators who cannot afford them, ViScene offers a cost-effective alternative, ultimately leading to a more accessible medium.

{"title":"The Efficacy of Collaborative Authoring of Video Scene Descriptions.","authors":"Rosiana Natalie,&nbsp;Joshua Tseng,&nbsp;Jolene Loh,&nbsp;Ian Luke Yi-Ren Chan,&nbsp;Huei Suen Tan,&nbsp;Ebrima H Jarjue,&nbsp;Hernisa Kacorri,&nbsp;Kotaro Hara","doi":"10.1145/3441852.3471201","DOIUrl":"https://doi.org/10.1145/3441852.3471201","url":null,"abstract":"<p><p>The majority of online video content remains inaccessible to people with visual impairments due to the lack of audio descriptions to depict the video scenes. Content creators have traditionally relied on professionals to author audio descriptions, but their service is costly and not readily available. We investigate the feasibility of creating more cost-effective audio descriptions that are also of high quality by involving novices. Specifically, we designed, developed, and evaluated ViScene, a web-based collaborative audio description authoring tool that enables a sighted novice author and a reviewer, either sighted or blind, to interact and contribute to scene descriptions (SDs), text that can be transformed into audio through text-to-speech. Through a mixed-design study with <i>N</i> = 60 participants, we assessed the quality of SDs created by sighted novices with feedback from both sighted and blind reviewers. Our results showed that with ViScene, novices could produce content that is Descriptive, Objective, Referable, and Clear at a cost of US$2.81pvm to US$5.48pvm, which is 54% to 96% lower than the professional service. However, the descriptions fell short in other quality dimensions (<i>e.g.,</i> learning, a measure of how well an SD conveys the video's intended message). While professional audio describers remain the gold standard, for content creators who cannot afford it, ViScene offers a cost-effective alternative, ultimately leading to a more accessible medium.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. 
Annual ACM Conference on Assistive Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8855356/pdf/nihms-1752253.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39941352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
TableView: Enabling Efficient Access to Web Data Records for Screen-Magnifier Users.
Pub Date : 2020-10-01 DOI: 10.1145/3373625.3417030
Hae-Na Lee, Sami Uddin, Vikas Ashok

People with visual impairments typically rely on screen-magnifier assistive technology to interact with webpages. As screen-magnifier users can only view a portion of the webpage content in an enlarged form at any given time, they have to endure an inconvenient and arduous process of repeatedly moving the magnifier focus back-and-forth over different portions of the webpage in order to make comparisons between data records, e.g., comparing the available flights in a travel website based on their prices, durations, etc. To address this issue, we designed and developed TableView, a browser extension that leverages a state-of-the-art information extraction method to automatically identify and extract data records and their attributes in a webpage, and subsequently presents them to a user in a compactly arranged tabular format that needs significantly less screen space compared to that currently occupied by these items in the page. This way, TableView is able to pack more items within the magnifier focus, thereby reducing the overall content area for panning, and hence making it easy for screen-magnifier users to compare different items before making their selections. A user study with 16 low vision participants showed that with TableView, the time spent on panning the data records in webpages was significantly reduced by 72.9% (avg.) compared to that with just a screen magnifier, and 66.5% compared to that with a screen magnifier using a space compaction method.
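The compaction step can be sketched independently of the extraction model: given data records already pulled from a page, arrange their attributes into one aligned table so that far more items fit inside the magnifier focus. A minimal illustration in Python; the function name and the sample flight records are invented for this sketch, and the real extension uses a learned information-extraction method rather than pre-extracted dicts:

```python
def to_compact_table(records):
    """Render a list of {attribute: value} dicts as one aligned text table."""
    headers = list(records[0])
    rows = [headers] + [[str(r.get(h, "")) for h in headers] for r in records]
    # Pad each column to the width of its longest cell so columns line up.
    widths = [max(len(row[i]) for row in rows) for i in range(len(headers))]
    return "\n".join(
        "  ".join(cell.ljust(w) for cell, w in zip(row, widths)) for row in rows
    )

flights = [  # hypothetical records extracted from a travel site
    {"airline": "AirA", "price": "$120", "duration": "2h 10m"},
    {"airline": "AirB", "price": "$95", "duration": "3h 05m"},
]
print(to_compact_table(flights))
```

Laying the records out side by side like this is what reduces the content area a magnifier user has to pan across.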

{"title":"TableView: Enabling Efficient Access to Web Data Records for Screen-Magnifier Users.","authors":"Hae-Na Lee,&nbsp;Sami Uddin,&nbsp;Vikas Ashok","doi":"10.1145/3373625.3417030","DOIUrl":"https://doi.org/10.1145/3373625.3417030","url":null,"abstract":"<p><p>People with visual impairments typically rely on screen-magnifier assistive technology to interact with webpages. As screen-magnifier users can only view a portion of the webpage content in an enlarged form at any given time, they have to endure an inconvenient and arduous process of repeatedly moving the magnifier focus back-and-forth over different portions of the webpage in order to make comparisons between data records, e.g., comparing the available flights in a travel website based on their prices, durations, etc. To address this issue, we designed and developed TableView, a browser extension that leverages a state-of-the-art information extraction method to automatically identify and extract data records and their attributes in a webpage, and subsequently presents them to a user in a compactly arranged tabular format that needs significantly less screen space compared to that currently occupied by these items in the page. This way, TableView is able to pack more items within the magnifier focus, thereby reducing the overall content area for panning, and hence making it easy for screen-magnifier users to compare different items before making their selections. A user study with 16 low vision participants showed that with TableView, the time spent on panning the data records in webpages was significantly reduced by 72.9% (avg.) compared to that with just a screen magnifier, and 66.5% compared to that with a screen magnifier using a space compaction method.</p>","PeriodicalId":72321,"journal":{"name":"ASSETS. 
Annual ACM Conference on Assistive Technologies","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3373625.3417030","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25455684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7