
Proceedings of the 17th International Web for All Conference: Latest Publications

Autism detection based on eye movement sequences on the web: a scanpath trend analysis approach
Pub Date : 2020-04-20 DOI: 10.1145/3371300.3383340
Sukru Eraslan, Y. Yeşilada, Victoria Yaneva, S. Harper
Autism diagnosis is a subjective, challenging and expensive procedure that relies on behavioral, historical and parental-report information. In our previous work, we proposed a machine learning classifier to be used as a potential screening tool or in conjunction with other diagnostic methods, thus aiding established diagnostic procedures. The classifier uses the eye movements of people on web pages, but it only considers non-sequential data. It achieves its best accuracy by combining data from several web pages, and its accuracy varies across different web pages. In this paper, we investigate whether it is possible to detect autism based on eye-movement sequences and achieve stable accuracy across different web pages, so that the method does not depend on specific web pages. We used Scanpath Trend Analysis (STA), which is designed to identify the trending path of a group of users on a web page based on their eye movements. We first identify the trending paths of people with autism and of neurotypical people. To detect whether or not a person has autism, we calculate the similarity of his/her path to the trending paths of people with autism and neurotypical people. If the path is more similar to the trending path of neurotypical people, we classify the person as neurotypical; otherwise, we classify him/her as a person with autism. We systematically evaluate our approach with an eye-tracking dataset of 15 verbal and highly independent people with autism and 15 neurotypical people on six web pages. Our evaluation shows that the STA approach performs better on individual web pages and provides more stable accuracy across different pages.
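The final classification step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scanpaths are encoded as strings of area-of-interest (AOI) identifiers and uses normalised Levenshtein similarity, a common choice for comparing scanpaths; the STA algorithm that produces the trending paths is not reproduced here.

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two AOI sequences.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def similarity(path, trend):
    # Normalised similarity in [0, 1]; 1 means identical sequences.
    longest = max(len(path), len(trend)) or 1
    return 1 - levenshtein(path, trend) / longest

def classify(path, trend_autism, trend_neurotypical):
    # Assign the person to whichever group's trending path is more similar.
    if similarity(path, trend_neurotypical) >= similarity(path, trend_autism):
        return "neurotypical"
    return "autism"
```

For example, `classify("ABCE", "AXXE", "ABCDE")` compares an individual's scanpath against both trending paths and returns the label of the closer one.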
Citations: 12
Towards generating web-accessible STEM documents from PDF
Pub Date : 2020-04-17 DOI: 10.1145/3371300.3383351
V. Sorge, Akashdeep Bansal, Neha Jadhav, Himanshu Garg, Ayushi Verma, M. Balakrishnan
PDF is still a very popular format that is widely used to exchange and archive electronic documents. Although considerable efforts have been made to ensure the accessibility of PDF documents, they are still far from ideal when complex content like formulas, diagrams or tables is present. Unfortunately, many publications in scientific subjects are available in PDF format only and are therefore only partially accessible, if at all. In this paper, we present a fully automated web-based technology to convert PDF documents into an accessible single-file format. We concentrate on presenting working solutions for mathematical formulas and tables, while also discussing some of the open problems in this context and how we aim to solve them in the future.
Citations: 7
Game changer: accessible audio and tactile guidance for board and card games
Pub Date : 2020-04-17 DOI: 10.1145/3371300.3383347
Gabriella M. Johnson, Shaun K. Kane
While board games are a popular social activity, their reliance on visual information can create accessibility problems for blind and visually impaired players. Because some players cannot easily read cards or locate pieces, they may be at a disadvantage or may be unable to play a game without sighted help. We present Game Changer, an augmented workspace that provides both audio descriptions and tactile additions to make the state of the board game accessible to blind and visually impaired players. In this paper, we describe the design of Game Changer and present findings from a user study in which seven blind participants used Game Changer to play against a sighted partner. Most players stated the game was more accessible with the additions from Game Changer and felt that Game Changer could be used to augment other games.
Citations: 2
Using a participatory activities toolkit to elicit privacy expectations of adaptive assistive technologies
Pub Date : 2020-04-17 DOI: 10.1145/3371300.3383336
Foad Hamidi, Kellie Poneres, Aaron K. Massey, A. Hurst
Individuals whose abilities change over time can benefit from assistive technologies that can detect and adapt to their current needs. While these Adaptive Assistive Technologies (AATs) offer exciting opportunities, their use presents an often-overlooked privacy tradeoff between usability and disclosing ability data. To explore this tradeoff from end-user perspectives, we developed a participatory activities toolkit composed of tangible low-fidelity physical cards, charts, and two software AAT prototypes. We used the kit in interviews with six older adults who experience pointing and typing difficulties when accessing the Internet. Participants had conflicting views about AATs collecting their data, and strong preferences about what data should be collected, how it should be used, and who should have access to it. The contributions of this paper are twofold: (1) we describe a novel approach to elicit detailed end-user privacy preferences and expectations, and (2) we provide insights from representative users of AATs towards their privacy.
Citations: 10
Deaf and hard-of-hearing users' prioritization of genres of online video content requiring accurate captions
Pub Date : 2020-04-17 DOI: 10.1145/3371300.3383337
Larwan Berke, Matthew Seita, Matt Huenerfauth
Online video is an important information source, yet its pace of growth, including user-submitted content, is so rapid that automatic captioning technologies are needed to make content accessible for people who are Deaf or Hard-of-Hearing (DHH). To support the future creation of a research dataset of online videos, we must prioritize which genres of online video content DHH users believe are most important to caption accurately. Our first contribution is to validate that the Best-Worst Scaling (BWS) methodology can accurately gather judgments on this topic, by conducting an in-person study with 25 DHH users using a card-sorting methodology to rank the importance of accurate captioning for various genres of YouTube video. Our second contribution is to identify the video genres of highest captioning importance via an online survey with 151 DHH individuals; those participants ranked News and Politics, Education, and Technology and Science highest.
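The counting-based scoring commonly used to analyse Best-Worst Scaling data can be sketched as follows. The genre names and trial data below are illustrative assumptions, and the paper's exact analysis may differ: each trial shows a subset of items and records which one a participant picked as best and which as worst, and an item's score is (times best − times worst) / times shown.

```python
from collections import Counter

def bws_scores(trials):
    """Each trial is (shown_items, best, worst). Returns the
    counting-based BWS score (#best - #worst) / #appearances per item."""
    best, worst, shown = Counter(), Counter(), Counter()
    for items, b, w in trials:
        shown.update(items)
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Hypothetical trials: (genres shown, picked best, picked worst).
trials = [
    (("News", "Education", "Comedy"), "News", "Comedy"),
    (("Education", "Comedy", "Music"), "Education", "Music"),
    (("News", "Music", "Comedy"), "News", "Comedy"),
]
scores = bws_scores(trials)  # scores range from -1 (always worst) to 1 (always best)
```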
Citations: 8
Accessible conversational user interfaces: considerations for design
Pub Date : 2020-04-17 DOI: 10.1145/3371300.3383343
K. Lister, Tim Coughlan, Francisco Iniesto, N. Freear, P. Devine
Conversational user interfaces (CUIs), such as chatbots and voice assistants, are increasingly common in day-to-day life and can be expected to become ever more pervasive in the future. These interfaces are being designed for ever more complex interactions, and they appear to have the potential to help people with disabilities interact through the web and with technologies embedded in the environment. However, to fulfil this promise they need to be designed to be accessible. This paper reviews a range of current guidance, reports, research and literature on accessible design for different disability groups, including users with mental health issues, autism, health conditions, cognitive disabilities, dyslexia or learning difficulties, and sensory, mobility or dexterity impairments. We collate the elements from this body of guidance that appear relevant to the design of accessible CUIs, as well as instances where the guidance is less conclusive and requires further exploration. Using this, we develop a set of questions which could be useful in the further research and development of accessible CUIs. We conclude by considering why CUIs could present opportunities for furthering accessibility, introducing an example of this potential: a project to design an assistant that supports students in disclosing their disabilities and organising support without the need to fill in forms.
Citations: 34
Tables on the web accessible?: unfortunately not!
Pub Date : 2020-04-17 DOI: 10.1145/3371300.3383349
Waqar Haider, Y. Yeşilada
Web accessibility guidelines, in particular WCAG (Web Content Accessibility Guidelines), cover a wide range of recommendations for making web content more accessible, including technical guidance on making certain structures, such as tables, accessible. Even though many studies investigate the accessibility of certain types of web sites or of web sites from certain countries, to our knowledge there is no specific study that looks at the accessibility of tables on the web. In this paper, we present a systematic study that analyzes the accessibility of more than 16,000 table elements, crawled from more than 900 different web pages. This study shows that tables are still widely used for layout, and that the WCAG guidelines related to data tables are not followed. Our research demonstrates the need for smart systems that automatically handle the accessibility of structures such as tables.
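A rough heuristic for the kind of audit described above might look like the following. This is an assumption-laden sketch (the authors' crawler and detection rules are not given here): it flags tables that carry none of WCAG's data-table markup (`<th>`, `<caption>`, or `scope`/`headers` attributes) as likely layout tables, using only the standard library.

```python
from html.parser import HTMLParser

class TableAudit(HTMLParser):
    """Count <table> elements with no <th>, <caption>, or scope/headers
    attributes -- a rough sign they are used for layout, not data."""
    def __init__(self):
        super().__init__()
        self.depth = 0          # nesting level of currently open tables
        self.has_markup = []    # per open table: data-table markup seen?
        self.layout_tables = 0
        self.data_tables = 0

    def handle_starttag(self, tag, attrs):
        if tag == "table":
            self.depth += 1
            self.has_markup.append(False)
        elif self.depth and tag in ("th", "caption"):
            self.has_markup[-1] = True
        elif self.depth and any(k in ("scope", "headers") for k, _ in attrs):
            self.has_markup[-1] = True

    def handle_endtag(self, tag):
        if tag == "table" and self.depth:
            self.depth -= 1
            if self.has_markup.pop():
                self.data_tables += 1
            else:
                self.layout_tables += 1

audit = TableAudit()
audit.feed("<table><tr><td>menu</td></tr></table>"
           "<table><caption>Sales</caption><tr><th>Q1</th></tr></table>")
```

Running the audit over the two sample tables counts one layout table (bare `<td>` cells only) and one data table (it has a caption and header cells).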
Citations: 4
A Saliency-driven Video Magnifier for People with Low Vision.
Pub Date : 2020-04-01 Epub Date: 2020-04-20 DOI: 10.1145/3371300.3383356
Ali Selman Aydin, Shirin Feiz, Vikas Ashok, I V Ramakrishnan

Consuming video content poses significant challenges for many users of screen magnifiers, the "go to" assistive technology for people with low vision. While screen magnifier software can achieve a zoom factor that makes the content of a video visible to low-vision users, navigating through videos is often a major challenge for these users. Towards making videos more accessible for low-vision users, we have developed the SViM video magnifier system [6]. Specifically, SViM consists of three different magnifier interfaces with easy-to-use means of interaction. All three interfaces are driven by visual saliency as a guiding signal, which quantifies interestingness at the pixel level. Saliency information, provided as a heatmap, is then processed to obtain distinct regions of interest. These regions of interest are tracked over time and displayed using an easy-to-use interface. We present a description of our overall design and interfaces.
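The step from a saliency heatmap to a region of interest can be illustrated with a minimal sketch. SViM's actual saliency model, segmentation, and tracking are not described in this abstract, so the code below simply thresholds a small heatmap (values and threshold are hypothetical) and returns the bounding box of the salient cells, which a magnifier could then zoom to.

```python
def salient_region(heatmap, threshold=0.5):
    """Return the bounding box (top, left, bottom, right) of all cells
    whose saliency exceeds `threshold`, or None if nothing is salient."""
    hits = [(r, c) for r, row in enumerate(heatmap)
                   for c, v in enumerate(row) if v > threshold]
    if not hits:
        return None
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return (min(rows), min(cols), max(rows), max(cols))

# Toy saliency heatmap: the salient blob occupies rows 1-2, columns 1-2.
heat = [
    [0.1, 0.2, 0.1, 0.0],
    [0.1, 0.9, 0.8, 0.0],
    [0.0, 0.7, 0.6, 0.1],
]
box = salient_region(heat)
```

A real system would extract connected components rather than one global box, and smooth the region across frames to avoid jitter; this sketch only shows the thresholding idea.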

Citations: 1
Indoor Localization for Visually Impaired Travelers Using Computer Vision on a Smartphone.
Pub Date : 2020-04-01 DOI: 10.1145/3371300.3383345
Giovanni Fusco, James M Coughlan

Wayfinding is a major challenge for visually impaired travelers, who generally lack access to visual cues such as landmarks and informational signs that many travelers rely on for navigation. Indoor wayfinding is particularly challenging since the most commonly used source of location information for wayfinding, GPS, is inaccurate indoors. We describe a computer vision approach to indoor localization that runs as a real-time app on a conventional smartphone, which is intended to support a full-featured wayfinding app in the future that will include turn-by-turn directions. Our approach combines computer vision, existing informational signs such as Exit signs, inertial sensors and a 2D map to estimate and track the user's location in the environment. An important feature of our approach is that it requires no new physical infrastructure. While our approach requires the user to either hold the smartphone or wear it (e.g., on a lanyard) with the camera facing forward while walking, it has the advantage of not forcing the user to aim the camera towards specific signs, which would be challenging for people with low or no vision. We demonstrate the feasibility of our approach with five blind travelers navigating an indoor space, with localization accuracy of roughly 1 meter once the localization algorithm has converged.
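The fusion of inertial dead reckoning with occasional absolute fixes from recognised signs might be sketched as follows. The paper's actual estimator is not specified in this abstract, so the stride length and blending gain below are hypothetical; the idea is only that step events advance the position estimate, while a recognised sign at a known map location pulls the estimate back toward ground truth.

```python
import math

class PositionTracker:
    """Dead-reckon from step events and heading; blend in an absolute
    fix whenever a known sign is recognised (hypothetical parameters)."""
    def __init__(self, x=0.0, y=0.0, gain=0.8, step_len=0.7):
        self.x, self.y = x, y
        self.gain = gain          # weight given to a sign-based fix
        self.step_len = step_len  # assumed stride length in metres

    def step(self, heading_rad):
        # One detected step in the given heading (radians).
        self.x += self.step_len * math.cos(heading_rad)
        self.y += self.step_len * math.sin(heading_rad)

    def sign_fix(self, sx, sy):
        # Pull the estimate toward the surveyed sign position.
        self.x += self.gain * (sx - self.x)
        self.y += self.gain * (sy - self.y)

tracker = PositionTracker()
for _ in range(10):          # walk ~7 m along the x axis
    tracker.step(0.0)
tracker.sign_fix(7.5, 0.0)   # an Exit sign at a known map location
```

A full system would also maintain heading uncertainty and constrain the estimate to the 2D map's walkable areas, which this sketch omits.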

Citations: 0
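The abstract above describes fusing occasional sign detections with inertial dead reckoning and a 2D floor map until the location estimate converges. The paper's actual algorithm and parameters are not reproduced here, but the general idea can be sketched as a simple particle filter; every name, the map geometry, the sign positions, and the noise levels below are invented for illustration only.

```python
# Minimal particle-filter sketch of sign-plus-inertial indoor localization.
# The map, sign positions, and noise parameters are all hypothetical; this is
# an illustration of the fusion idea, not the paper's implementation.
import math
import random

# Hypothetical floor plan: walkable corridor cells on a coarse grid.
WALKABLE = {(x, 0) for x in range(20)} | {(10, y) for y in range(8)}
SIGNS = [(5, 0), (10, 6)]    # assumed Exit sign positions (map units)
SIGN_RANGE = 3.0             # assumed camera detection range

def in_map(p):
    """Snap a continuous position to the nearest grid cell and test walkability."""
    return (round(p[0]), round(p[1])) in WALKABLE

def predict(particles, step, heading):
    """Dead-reckoning update: move each particle by one noisy step."""
    out = []
    for x, y in particles:
        h = heading + random.gauss(0, 0.15)    # heading noise (radians)
        d = step * random.gauss(1.0, 0.1)      # step-length noise
        out.append((x + d * math.cos(h), y + d * math.sin(h)))
    return out

def weight(p, sign_seen):
    """Zero weight off the map; otherwise favor particles whose predicted
    sign visibility agrees with whether an Exit sign was actually detected."""
    if not in_map(p):
        return 0.0
    near = min(math.dist(p, s) for s in SIGNS) <= SIGN_RANGE
    return 0.9 if near == sign_seen else 0.1

def resample(particles, weights):
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [(random.uniform(0, 19), 0.0) for _ in range(500)]
# Simulated walk east along the corridor, one 1-unit step per iteration.
for t in range(8):
    particles = predict(particles, step=1.0, heading=0.0)
    sign_seen = t in (3, 4)   # pretend a sign is spotted mid-walk
    w = [weight(p, sign_seen) for p in particles]
    particles = resample(particles, w)

est_x = sum(p[0] for p in particles) / len(particles)
print(f"estimated x after 8 steps: {est_x:.1f}")
```

The off-map check plays the role of the 2D map constraint, and the sign-visibility weight plays the role of the camera observations: either source alone is ambiguous, but together they prune inconsistent hypotheses until the particle cloud tightens around one location.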
Proceedings of the 17th International Web for All Conference
Pub Date : 2020-01-01 DOI: 10.1145/3371300
Citations: 2