
The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility: latest publications

Self-selection of accessibility options
Nithin Santhanam, Shari Trewin, C. Swart, P. Santhanam
This study focuses on the use of web accessibility software by people with cerebral palsy performing three typical user tasks. We evaluate the customization options in the IBM accessibility Works add-on to the Mozilla Firefox browser, as used by ten users. While specific features provide significant benefit, we find that users tend to pick unnecessary options, resulting in a potentially negative user experience.
{"title":"Self-selection of accessibility options","authors":"Nithin Santhanam, Shari Trewin, C. Swart, P. Santhanam","doi":"10.1145/2049536.2049605","DOIUrl":"https://doi.org/10.1145/2049536.2049605","url":null,"abstract":"This study focuses on the use of web accessibility software by people with cerebral palsy performing three typical user tasks. We evaluate the customization options in the IBM accessibility Works add-on to the Mozilla Firefox browser, as used by ten users. While specific features provide significant benefit, we find that users tend to pick unnecessary options, resulting in a potentially negative user experience.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123073454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
The vlogging phenomena: a deaf perspective
Ellen S. Hibbard, D. Fels
Highly textual websites present barriers to Deaf people, who primarily use American Sign Language (ASL) for communication. Deaf people have been posting ASL content in the form of vlogs to YouTube and to specialized websites such as Deafvideo.TV. This paper presents some of the first insights into the use of vlogging technology and techniques among the Deaf community. The findings suggest that there are differences between YouTube and Deafvideo.TV due to differences between mainstream and specialized sites. Vlogging technology seems to influence the use of styles that are not found, or are used differently, in face-to-face communication. Examples include the alteration of vloggers' signing space to convey different meanings on screen.
{"title":"The vlogging phenomena: a deaf perspective","authors":"Ellen S. Hibbard, D. Fels","doi":"10.1145/2049536.2049549","DOIUrl":"https://doi.org/10.1145/2049536.2049549","url":null,"abstract":"Highly textual websites present barriers to Deaf people, primarily using American Sign Language for communication. Deaf people have been posting ASL content in form of vlogs to YouTube and specialized websites such as Deafvideo.TV. This paper presents some of the first insights into the use of vlogging technology and techniques among the Deaf community. The findings suggest that there are differences between YouTube and Deafvideo.TV due to differences between mainstream and specialized sites. Vlogging technology seems to influence use of styles that are not found or are used differently in face-to-face communications. Examples include the alteration of vloggers' signing space to convey different meanings on screen.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131729592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
Improving accessibility for deaf people: an editor for computer assisted translation through virtual avatars.
Davide Barberis, Nicola Garazzino, P. Prinetto, G. Tiotto
This paper presents the ATLAS Editor for Assisted Translation (ALEAT), a novel tool for the Computer Assisted Translation (CAT) from written Italian to Italian Sign Language (LIS) for Deaf people. The tool is a web application developed within the ATLAS project, which targets automatic translation from written Italian to Italian Sign Language in the weather forecast domain. ALEAT takes as input a text written according to Italian grammar, performs the automatic translation of the sentence, and presents the result to the user by visualizing it through a virtual character. Since the automatic translation is error-prone, ALEAT allows the user to correct it. The translation is stored in a database using a novel formalism, the ATLAS Written Extended LIS (AEWLIS). AEWLIS allows the translation to be played back through the ATLAS visualization module and loaded into ALEAT for further modification and improvement.
{"title":"Improving accessibility for deaf people: an editor for computer assisted translation through virtual avatars.","authors":"Davide Barberis, Nicola Garazzino, P. Prinetto, G. Tiotto","doi":"10.1145/2049536.2049593","DOIUrl":"https://doi.org/10.1145/2049536.2049593","url":null,"abstract":"This paper presents the ATLAS Editor for Assisted Translation (ALEAT), a novel tool for the Computer Assisted Translation (CAT) from Italian written language to Italian Sign Language (LIS) of Deaf People. The tool is a web application that has been developed within the ATLAS project, that targets the automatic translation from Italian written language to Italian Sign Language in the weather forecasts domain. ALEAT takes a text as input, written according to the Italian Language grammar, performs the automatic translation of the sentence and gives the result of the translation to the user by visualizing it through a virtual character. Since the automatic translation is error-prone, ALEAT allows to correct it with the intervention of the user. The translation is stored in a database resorting to a novel formalism: the ATLAS Written Extended LIS (AEWLIS). AEWLIS allows to play the translation through the ATLAS visualization module and to load it within ALEAT for successive modifications and improvement.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133914699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Toward 3D scene understanding via audio-description: Kinect-iPad fusion for the visually impaired
J. D. Gomez, Sinan Mohammed, G. Bologna, T. Pun
Microsoft's Kinect 3-D motion sensor is a low-cost 3D camera that provides color and depth information of indoor environments. In this demonstration, the functionality of this entertainment-oriented camera, combined with an iPad's tangible interface, is turned to the benefit of the visually impaired. A computer-vision-based framework for real-time object localization and audio description is introduced. First, objects are extracted from the scene and recognized using feature descriptors and machine learning. Second, the recognized objects are labeled with instrument sounds, while their positions in 3D space are conveyed by virtual sound sources. As a result, the scene can be heard and explored by finger-triggering the sounds on the iPad, onto which a top view of the objects is mapped. This enables blindfolded users to build a mental occupancy grid of the environment. The approach presented here brings the promise of efficient assistance and could be adapted as an electronic travel aid for the visually impaired in the near future.
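The spatialized labeling idea in this abstract can be illustrated with a minimal sketch, assuming a simple stereo-pan model: a recognized object's 3D position is mapped to a pan and gain for an instrument sound. The class-to-instrument table and the function name `spatial_cue` are illustrative assumptions, not the authors' actual mapping.

```python
# A minimal sketch, not the authors' system: map a recognized object's 3D
# position (as a Kinect-style depth camera might report it) to a stereo pan
# and gain for an instrument sound. The mapping table is an assumption.
import math

INSTRUMENT_FOR_CLASS = {"chair": "cello", "cup": "flute", "door": "drum"}

def spatial_cue(label, x, y, z):
    """Map a position (metres, camera frame: x right, y up, z forward)
    to (instrument, pan in [-1, 1], gain in (0, 1])."""
    azimuth = math.atan2(x, z)                   # angle off the camera axis
    pan = max(-1.0, min(1.0, azimuth / (math.pi / 2)))
    distance = math.sqrt(x * x + y * y + z * z)
    gain = 1.0 / (1.0 + distance)                # quieter when farther away
    return INSTRUMENT_FOR_CLASS.get(label, "piano"), pan, gain

# A cup half a metre to the right and two metres ahead of the camera:
print(spatial_cue("cup", 0.5, 0.0, 2.0))         # ('flute', ~0.16, ~0.33)
```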
{"title":"Toward 3D scene understanding via audio-description: Kinect-iPad fusion for the visually impaired","authors":"J. D. Gomez, Sinan Mohammed, G. Bologna, T. Pun","doi":"10.1145/2049536.2049613","DOIUrl":"https://doi.org/10.1145/2049536.2049613","url":null,"abstract":"Microsoft's Kinect 3-D motion sensor is a low cost 3D camera that provides color and depth information of indoor environments. In this demonstration, the functionality of this fun-only camera accompanied by an iPad's tangible interface is targeted to the benefit of the visually impaired. A computer-vision-based framework for real time objects localization and for their audio description is introduced. Firstly, objects are extracted from the scene and recognized using feature descriptors and machine-learning. Secondly, the recognized objects are labeled by instruments sounds, whereas their position in 3D space is described by virtual space sources of sound. As a result, the scene can be heard and explored while finger-triggering the sounds within the iPad, on which a top-view of the objects is mapped. This enables blindfolded users to build a mental occupancy grid of the environment. The approach presented here brings the promise of efficient assistance and could be adapted as an electronic travel aid for the visually-impaired in the near future.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125132518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Empowering individuals with do-it-yourself assistive technology
A. Hurst, J. Tobias
Assistive Technologies empower individuals to accomplish tasks they might not be able to do otherwise. Unfortunately, a large percentage of Assistive Technology devices that are purchased (35% or more) end up unused or abandoned [7,10], leaving many people with Assistive Technology that is inappropriate for their needs. Low acceptance rates of Assistive Technology occur for many reasons, but common factors include 1) failure to consider user opinion in selection, 2) ease of obtaining devices, 3) poor device performance, and 4) changes in user needs and priorities [7]. We are working to help more people gain access to the Assistive Technology they need by empowering non-engineers to "Do-It-Yourself" (DIY) and create, modify, or build. This paper illustrates that it is possible to custom-build Assistive Technology, and argues why empowering users to make their own Assistive Technology can improve the adoption process (and subsequently adoption rates). We discuss DIY experiences and impressions from individuals who have either built Assistive Technology before or rely on it. We found that increased control over design elements, passion, and cost motivated individuals to make their own Assistive Technology instead of buying it. We discuss how a new generation of rapid prototyping tools and online communities can empower more individuals. We synthesize our findings into design recommendations to help promote future DIY-AT success.
{"title":"Empowering individuals with do-it-yourself assistive technology","authors":"A. Hurst, J. Tobias","doi":"10.1145/2049536.2049541","DOIUrl":"https://doi.org/10.1145/2049536.2049541","url":null,"abstract":"Assistive Technologies empower individuals to accomplish tasks they might not be able to do otherwise. Unfortunately, a large percentage of Assistive Technology devices that are purchased (35% or more) end up unused or abandoned [7,10], leaving many people with Assistive Technology that is inappropriate for their needs. Low acceptance rates of Assistive Technology occur for many reasons, but common factors include 1) lack of considering user opinion in selection, 2) ease in obtaining devices, 3) poor device performance, and 4) changes in user needs and priorities [7]. We are working to help more people gain access to the Assistive Technology they need by empowering non-engineers to \"Do-It-Yourself\" (DIY) and create, modify, or build. This paper illustrates that it is possible to custom-build Assistive Technology, and argues why empowering users to make their own Assistive Technology can improve the adoption process (and subsequently adoption rates). We discuss DIY experiences and impressions from individuals who have either built Assistive Technology before, or rely on it. We found that increased control over design elements, passion, and cost motivated individuals to make their own Assistive Technology instead of buying it. We discuss how a new generation of rapid prototyping tools and online communities can empower more individuals. We synthesize our findings into design recommendations to help promote future DIY-AT success.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121929089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 237
Humsher: a predictive keyboard operated by humming
Ondrej Polácek, Z. Míkovec, Adam J. Sporka, P. Slavík
This paper presents Humsher -- a novel text entry method operated by non-verbal vocal input, specifically the sound of humming. The method utilizes an adaptive language model for text prediction. Four different user interfaces are presented and compared. Three of them use a dynamic layout in which n-grams of characters are presented to the user to choose from according to their probability in the given context. The last interface utilizes a static layout, in which the characters are displayed alphabetically and a modified binary search algorithm is used for efficient selection of a character. All interfaces were compared and evaluated in a user study involving 17 able-bodied subjects. Case studies with four disabled people were also performed in order to validate the potential of the method for motor-impaired users. The average speed of the fastest interface was 14 characters per minute, while the fastest user reached 30 characters per minute. Disabled participants were able to type at 14 -- 22 characters per minute after seven sessions.
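The two selection strategies described in the abstract (a probability-ordered dynamic layout and binary search over a static alphabetical layout) can be sketched as follows. This is a minimal illustration assuming a simple character bigram model; the class and function names are not taken from the paper.

```python
# Minimal sketch (not the authors' implementation) of the two selection ideas:
# offering the most probable next characters from an adaptive character model,
# and narrowing an alphabetical static layout by binary search.
from collections import Counter, defaultdict
import string

class CharBigramModel:
    """Adaptive character-level bigram model, updated as the user types."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def update(self, text):
        for prev, nxt in zip(text, text[1:]):
            self.counts[prev][nxt] += 1

    def most_probable(self, context, k=4):
        """Candidate next characters for the dynamic layout, most likely first."""
        last = context[-1] if context else " "
        return [c for c, _ in self.counts[last].most_common(k)]

def binary_select(alphabet, answer):
    """Static layout: halve the alphabetical range until one character remains.
    `answer(chunk)` stands in for the user's hum ("is my character in chunk?")."""
    lo, hi = 0, len(alphabet)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if answer(alphabet[lo:mid]):
            hi = mid
        else:
            lo = mid
    return alphabet[lo]

model = CharBigramModel()
model.update("the quick brown fox jumps over the lazy dog")
print(model.most_probable("th"))   # ['e'] -- 'h' was only ever followed by 'e'
print(binary_select(string.ascii_lowercase, lambda chunk: "m" in chunk))   # 'm'
```

With four predicted candidates, a character is reached in a single selection when the prediction is right, while the alphabetical binary search guarantees at most five yes/no confirmations per character for a 26-letter alphabet (since 2^5 = 32 >= 26).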
{"title":"Humsher: a predictive keyboard operated by humming","authors":"Ondrej Polácek, Z. Míkovec, Adam J. Sporka, P. Slavík","doi":"10.1145/2049536.2049552","DOIUrl":"https://doi.org/10.1145/2049536.2049552","url":null,"abstract":"This paper presents Humsher -- a novel text entry method operated by the non-verbal vocal input, specifically the sound of humming. The method utilizes an adaptive language model for text prediction. Four different user interfaces are presented and compared. Three of them use dynamic layout in which n-grams of characters are presented to the user to choose from according to their probability in given context. The last interface utilizes static layout, in which the characters are displayed alphabetically and a modified binary search algorithm is used for an efficient selection of a character. All interfaces were compared and evaluated in a user study involving 17 able-bodied subjects. Case studies with four disabled people were also performed in order to validate the potential of the method for motor-impaired users. The average speed of the fastest interface was 14 characters per minute, while the fastest user reached 30 characters per minute. Disabled participants were able to type at 14 -- 22 characters per minute after seven sessions.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127444059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
In-vehicle assistive technology (IVAT) for drivers who have survived a traumatic brain injury
Julia DeBlasio Olsheski, B. Walker, Jeff McCloud
IVAT (in-vehicle assistive technology) is an in-dash interface born of a collaborative effort between the Shepherd Center assistive technology team, the Georgia Tech Sonification Laboratory, and Centrafuse. The aim of this technology is to increase driver safety by taking individual cognitive abilities and limitations into account. While the potential applications of IVAT are widespread, the initial population of interest for the current research is survivors of a traumatic brain injury (TBI). TBI can cause a variety of impairments that limit driving ability. IVAT is aimed at enabling the individual to overcome these limitations in order to regain some independence by driving after injury.
{"title":"In-vehicle assistive technology (IVAT) for drivers who have survived a traumatic brain injury","authors":"Julia DeBlasio Olsheski, B. Walker, Jeff McCloud","doi":"10.1145/2049536.2049595","DOIUrl":"https://doi.org/10.1145/2049536.2049595","url":null,"abstract":"IVAT (in-vehicle assistive technology) is an in-dash interface borne out from a collaborative effort between the Shepherd Center assistive technology team, the Georgia Tech Sonification Laboratory, and Centrafuse. The aim of this technology is to increase driver safety by taking individual cognitive abilities and limitations into account. While the potential applications of IVAT are widespread, the initial population of interest for the current research is survivors of a traumatic brain injury (TBI). TBI can cause a variety of impairments that limit driving ability. IVAT is aimed at enabling the individual to overcome these limitations in order to regain some independence by driving after injury.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127470899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Exploring iconographic interface in emergency for deaf
T. Pereira, Benjamim Fonseca, H. Paredes, Miriam Cabo
In this demo, we present an application for mobile phones that allows communication between deaf people and emergency medical services through an iconographic touch interface. This application can be useful especially for deaf people, but also for persons without disabilities who face sudden situations where speech is hard to articulate.
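A hypothetical sketch of how such an interface might turn a sequence of tapped icons into a structured message for a dispatcher is shown below; the icon vocabulary and the message template are assumptions for illustration, not the demo's actual design.

```python
# Hypothetical sketch: tapped icons are composed into a structured text
# message for an emergency dispatcher. Icon names and wording are assumptions.
ICON_PHRASES = {
    "fire": "there is a fire",
    "injury": "someone is injured",
    "car_crash": "there has been a car crash",
    "home": "at my home",
    "street": "on the street",
}

def compose_message(icons, gps_fix=None):
    parts = [ICON_PHRASES[i] for i in icons if i in ICON_PHRASES]
    msg = "Emergency (sent from icon interface): " + "; ".join(parts)
    if gps_fix:
        msg += f". Location: {gps_fix}"
    return msg

print(compose_message(["injury", "street"], gps_fix="<device GPS fix>"))
```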
{"title":"Exploring iconographic interface in emergency for deaf","authors":"T. Pereira, Benjamim Fonseca, H. Paredes, Miriam Cabo","doi":"10.1145/2049536.2049589","DOIUrl":"https://doi.org/10.1145/2049536.2049589","url":null,"abstract":"In this demo, we present an application for mobile phones, which can allow communication between deaf and emergency medical services using an iconographic touch interface. This application can be useful especially for deaf but also for persons without disabilities that face sudden situations where speech is hard to articulate.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126900033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Analyzing visual questions from visually impaired users
Erin L. Brady
Many new technologies have been developed to assist people who are visually impaired in learning about their environment, but there is little understanding of their motivations for using these tools. Our tool VizWiz allows users to take a picture using their mobile phone, ask a question about the picture's contents, and receive an answer in near real time. This study investigates patterns in the questions that visually impaired users ask about their surroundings, and presents the benefits and limitations of responses from both human and computerized sources.
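As a hypothetical illustration of how visual questions might be grouped for this kind of analysis, the sketch below tags questions with coarse categories using keyword rules; the categories and keywords are assumptions, not the study's actual coding scheme.

```python
# Hypothetical keyword tagging for grouping visual questions; the rules are
# illustrative assumptions, not the study's coding scheme.
RULES = {
    "identification": ("what is this", "what's this", "what kind of"),
    "reading":        ("what does it say", "read", "label", "expiration"),
    "color":          ("what color", "colour"),
    "state":          ("is the", "is this on", "is it on"),
}

def tag_question(question):
    q = question.lower()
    tags = [cat for cat, keys in RULES.items() if any(k in q for k in keys)]
    return tags or ["other"]

for q in ["What color is this shirt?", "Can you read this label for me?", "Is the stove on?"]:
    print(q, "->", tag_question(q))
```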
{"title":"Analyzing visual questions from visually impaired users","authors":"Erin L. Brady","doi":"10.1145/2049536.2049622","DOIUrl":"https://doi.org/10.1145/2049536.2049622","url":null,"abstract":"Many new technologies have been developed to assist people who are visually impaired in learning about their environment, but there is little understanding of their motivations for using these tools. Our tool VizWiz allows users to take a picture using their mobile phone, ask a question about the picture's contents, and receive an answer in nearly realtime. This study investigates patterns in the questions that visually impaired users ask about their surroundings, and presents the benefits and limitations of responses from both human and computerized sources.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116374321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
StopFinder: improving the experience of blind public transit riders with crowdsourcing
Sanjana Prasain
I developed a mobile system that crowdsources landmarks around bus stops for blind transit riders. The main focus of my research is to develop a method to provide blind transit riders with reliable and accurate information about landmarks around bus stops. In addition, my research focuses on understanding how access to such information affects their use of public transportation.
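The kind of crowdsourced record and aggregation such a system might use can be sketched as follows; the field names, the example stop identifier, and the two-reporter confirmation rule are assumptions for illustration, not StopFinder's actual design.

```python
# Hypothetical sketch of a crowdsourced landmark report and a simple
# aggregation rule for deciding which landmarks to announce per stop.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class LandmarkReport:
    stop_id: str      # transit agency stop identifier
    landmark: str     # e.g. "shelter", "bench", "newspaper box"
    reporter: str     # anonymous contributor id

def confirmed_landmarks(reports, min_reporters=2):
    """Keep landmarks reported independently by at least `min_reporters` people."""
    reporters = defaultdict(set)
    for r in reports:
        reporters[(r.stop_id, r.landmark)].add(r.reporter)
    confirmed = defaultdict(list)
    for (stop, landmark), who in reporters.items():
        if len(who) >= min_reporters:
            confirmed[stop].append(landmark)
    return dict(confirmed)

reports = [
    LandmarkReport("stop-1234", "shelter", "u1"),
    LandmarkReport("stop-1234", "shelter", "u2"),
    LandmarkReport("stop-1234", "bench", "u1"),
]
print(confirmed_landmarks(reports))   # {'stop-1234': ['shelter']}
```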
{"title":"StopFinder: improving the experience of blind public transit riders with crowdsourcing","authors":"Sanjana Prasain","doi":"10.1145/2049536.2049629","DOIUrl":"https://doi.org/10.1145/2049536.2049629","url":null,"abstract":"I developed a system for mobile devices for crowdsourcing landmarks around bus stops for blind transit riders. The main focus of my research is to develop a method to provide reliable and accurate information about landmarks around bus stops to blind transit riders. In addition to that, my research focuses on understanding how access to such information affects their use of public transportation.","PeriodicalId":351090,"journal":{"name":"The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115542496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20