
Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility: Latest Publications

Due Process and Primary Jurisdiction Doctrine: A Threat to Accessibility Research and Practice?
J. Lazar
The Web Content Accessibility Guidelines (WCAG) are the most well-documented and widely accepted set of interface guidelines on the planet, based on empirical research and a participatory process of stakeholder input. A recent case in a U.S. Federal District Court, Robles v. Dominos Pizza LLC, involved a blind individual requesting that Dominos Pizza make their web site and mobile app accessible to people with disabilities, utilizing the WCAG. The court ruled that, due to the legal concepts of due process and primary jurisdiction doctrine, the plaintiff lost the case simply for asking for the WCAG. This court ruling minimizes the importance of evidence-based accessibility research and guidelines; this poster will provide background on the case, describe a preliminary analysis of related cases, and discuss implications for accessibility researchers.
Citations: 5
Session details: Session 7: Enhancing Navigation
Anke M. Brock
{"title":"Session details: Session 7: Enhancing Navigation","authors":"Anke M. Brock","doi":"10.1145/3284381","DOIUrl":"https://doi.org/10.1145/3284381","url":null,"abstract":"","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134311209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Toward a Technology-based Tool to Support Idea Generation during Participatory Design with Children with Autism Spectrum Disorders
A. Constantin, J. Hourcade
Our research explores the development of a novel technology-based prototype to support children and designers during brainstorming, one of the most challenging activities within Participatory Design (PD). This paper describes a proof-of-concept prototype for a tool that aims to empower children with Autism Spectrum Disorders (ASD) during PD, maximising their contributions to the design and their own benefits. Preliminary results revealed that the prototype has the potential to reduce anxiety in children with ASD and to help unlock their creativity.
Citations: 8
Applying Transfer Learning to Recognize Clothing Patterns Using a Finger-Mounted Camera
Lee Stearns, Leah Findlater, Jon E. Froehlich
Color identification tools do not identify visual patterns or allow users to quickly inspect multiple locations, which are both important for identifying clothing. We are exploring the use of a finger-based camera that allows users to query clothing colors and patterns by touch. Previously, we demonstrated the feasibility of this approach using a small, highly-controlled dataset and combining two image classification techniques commonly used for object recognition. Here, to improve scalability and robustness, we collect a dataset of fabric images from online sources and apply transfer learning to train an end-to-end deep neural network to recognize visual patterns. This new approach achieves 92% accuracy in a general case and 97% when tuned for images from a finger-mounted camera.
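As a rough illustration of the transfer-learning step described in the abstract, the sketch below fine-tunes a pretrained ResNet-18 on a folder of fabric images. The dataset path, number of pattern classes, and training settings are illustrative assumptions, not details taken from the paper.

```python
# Minimal transfer-learning sketch (not the authors' code): freeze a pretrained
# backbone and train a new classification head on fabric-pattern images.
# Assumes a hypothetical folder layout fabric_data/<pattern_name>/*.jpg.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_PATTERNS = 5          # assumed number of pattern classes (striped, dotted, ...)
DATA_DIR = "fabric_data"  # hypothetical path to fabric images gathered online

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder(DATA_DIR, transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():                 # freeze the pretrained layers
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_PATTERNS)  # new head for patterns

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                           # a few epochs for illustration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Freezing the backbone and training only the final layer keeps data requirements small, which matters when the fabric images are scraped from online sources rather than captured in the lab.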
Citations: 6
Exploring the Performance of Facial Expression Recognition Technologies on Deaf Adults and Their Children
I. Shaffer
Facial and head movements have important linguistic roles in American Sign Language (ASL) and other sign languages. Without being properly trained, both human observers and existing emotion recognition tools will misinterpret ASL linguistic facial expressions. In this study, we capture over 2,000 photographs of 15 participants: five hearing, five Deaf, and five Children of Deaf Adults (CODAs). We then analyze the performance of six commercial facial expression recognition services on these photographs. Key observations include poor face detection rates for Deaf participants, more accurate emotion recognition for Deaf and CODA participants, and frequent misinterpretation of ASL linguistic markers as negative emotions. This suggests a need to include data from ASL users in the training sets for these technologies.
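To make the reported metrics concrete, here is a minimal sketch of the kind of per-group tally such an analysis implies; it is not the study's pipeline, and analyze_face is a hypothetical stand-in for a call to one of the commercial services.

```python
# Hypothetical tally of face-detection rate and emotion accuracy per participant
# group (Deaf, CODA, hearing). `analyze_face` stands in for a commercial API call.
from collections import defaultdict

def analyze_face(image_path):
    """Placeholder: return a predicted emotion label, or None if no face is found."""
    return None  # replace with a real service call

def tally(photos):
    """photos: iterable of (image_path, group, expected_emotion) tuples."""
    total, detected, correct = defaultdict(int), defaultdict(int), defaultdict(int)
    for path, group, expected in photos:
        total[group] += 1
        prediction = analyze_face(path)
        if prediction is None:
            continue                      # counts against the detection rate
        detected[group] += 1
        if prediction == expected:
            correct[group] += 1
    for group in total:
        print(group,
              "detection rate:", detected[group] / total[group],
              "emotion accuracy:", correct[group] / max(detected[group], 1))
```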
Citations: 10
Automated Person Detection in Dynamic Scenes to Assist People with Vision Impairments: An Initial Investigation
Lee Stearns, Anja Thieme
We propose a computer vision system that can automatically detect people in dynamic real-world scenes, enabling people with vision impairments to have more awareness of, and interactions with, other people in their surroundings. As an initial step, we investigate the feasibility of four camera systems that vary in their placement, field-of-view, and image distortion for: (i) capturing people generally; and (ii) detecting people via a specific person-pose estimator. Based on our findings, we discuss future opportunities and challenges for detecting people in dynamic scenes, and for communicating that information to visually impaired users.
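For readers unfamiliar with person-pose estimators, the sketch below shows one plausible way to run an off-the-shelf detector over a single frame; the model choice (torchvision's Keypoint R-CNN), the image path, and the confidence threshold are assumptions for illustration and are not taken from the paper.

```python
# Illustrative person detection via an off-the-shelf pose estimator
# (torchvision Keypoint R-CNN); requires torchvision >= 0.13.
import torch
from torchvision.transforms.functional import to_tensor
from torchvision.models.detection import (
    keypointrcnn_resnet50_fpn,
    KeypointRCNN_ResNet50_FPN_Weights,
)
from PIL import Image

model = keypointrcnn_resnet50_fpn(
    weights=KeypointRCNN_ResNet50_FPN_Weights.DEFAULT).eval()

frame = Image.open("frame.jpg").convert("RGB")   # hypothetical camera frame
with torch.no_grad():
    output = model([to_tensor(frame)])[0]        # boxes, scores, keypoints, ...

for box, score in zip(output["boxes"], output["scores"]):
    if score < 0.8:                              # arbitrary confidence cutoff
        continue
    x1, y1, x2, y2 = box.tolist()
    print(f"person at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), score {score:.2f}")
```

The detected bounding boxes could then be translated into spoken or haptic feedback about where people are within the camera's field of view.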
Citations: 12
Understanding Authentication Method Use on Mobile Devices by People with Vision Impairment
Daniella Briotto Faustino, A. Girouard
Passwords help people avoid unauthorized access to their personal devices but are not without challenges, such as memorability and shoulder-surfing attacks. Little is known about how people with vision impairment ensure their digital security in mobile contexts. We conducted an online survey to understand their strategies for remembering passwords, their perceptions of authentication methods, and their self-assessed ability to keep their digital information safe. We collected answers from 325 people who are blind or have low vision across 12 countries and found that most use familiar names and numbers to create memorable passwords, and that the majority consider fingerprint to be the most secure and accessible user authentication method and PINs the least secure. This paper presents our survey results and provides insights for designing better authentication methods for people with vision impairment.
Citations: 10
"Siri Talks at You": An Empirical Investigation of Voice-Activated Personal Assistant (VAPA) Usage by Individuals Who Are Blind “Siri对你说话”:盲人使用声控个人助理(VAPA)的实证调查
A. Abdolrahmani, Ravi Kuber, Stacy M. Branham
Voice-activated personal assistants (VAPAs)--like Amazon Echo or Apple Siri--offer considerable promise to individuals who are blind due to widespread adoption of these non-visual interaction platforms. However, studies have yet to focus on the ways in which these technologies are used by individuals who are blind, along with whether barriers are encountered during the process of interaction. To address this gap, we interviewed fourteen legally-blind adults with experience of home and/or mobile-based VAPAs. While participants appreciated the access VAPAs provided to inaccessible applications and services, they faced challenges relating to the input, responses from VAPAs, and control of information presented. User behavior varied depending on the situation or context of the interaction. Implications for design are suggested to support inclusivity when interacting with VAPAs. These include accounting for privacy and situational factors in design, examining ways to support concerns over trust, and synchronizing presentation of visual and non-visual cues.
Citations: 112
Tangicraft
David Bar-El, Thomas Large, Lydia Davison, M. Worsley
With millions of players worldwide, Minecraft has become a rich context for playing, socializing and learning for children. However, as is the case with many video games, players must rely heavily on vision to navigate and participate in the game. We present our Work-In-Progress on Tangicraft, a multimodal interface designed to empower visually impaired children to play and collaborate around Minecraft. Our work includes two strands of prototypes. The first is a haptic sensing wearable. The second is a set of tangible blocks that communicate with the game environment using webcam-enabled codes.
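The abstract does not specify what the "webcam-enabled codes" are; one common way to implement such blocks is to print a fiducial marker (for example an ArUco tag) on each block and detect it from the webcam feed, as in this hedged sketch using OpenCV (version 4.7 or later, with the aruco module).

```python
# Hypothetical fiducial-marker reading for tangible blocks; each detected marker
# id would be mapped to a Minecraft block or command by a separate bridge.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

capture = cv2.VideoCapture(0)                 # default webcam
while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is not None:
        for marker_id in ids.flatten():
            print("saw block marker", int(marker_id))   # forward to the game here
    cv2.imshow("blocks", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()
```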
{"title":"Tangicraft","authors":"David Bar-El, Thomas Large, Lydia Davison, M. Worsley","doi":"10.1145/3234695.3241031","DOIUrl":"https://doi.org/10.1145/3234695.3241031","url":null,"abstract":"With millions of players worldwide, Minecraft has become a rich context for playing, socializing and learning for children. However, as is the case with many video games, players must rely heavily on vision to navigate and participate in the game. We present our Work-In-Progress on Tangicraft, a multimodal interface designed to empower visually impaired children to play and collaborate around Minecraft. Our work includes two strands of prototypes. The first is a haptic sensing wearable. The second is a set of tangible blocks that communicate with the game environment using webcam-enabled codes.","PeriodicalId":110197,"journal":{"name":"Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility","volume":"191 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114654930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Design and Testing of Sensors for Text Entry and Mouse Control for Individuals with Neuromuscular Diseases
Anna M. H. Abrams, Carl Fridolin Weber, P. Beckerle
For individuals with a motor disorder of neuromuscular origin, computer use can be challenging, and depending on the medical condition, alternative input methods such as speech or eye tracking are not an option. Here, piezo sensors, inertial measurement units, and force resistance sensors are used to develop input devices that can substitute for a mouse and keyboard. The devices are tested in a case study with one potential user with ataxia, and future user studies will deliver additional insights into users' specific needs and guide further improvements.
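As one concrete possibility for the mouse-control side, the sketch below maps tilt readings from an inertial measurement unit to relative cursor movement; read_imu_tilt is a hypothetical driver call, the gain and dead-zone values are assumptions, and cursor movement uses the pynput library rather than whatever the authors used.

```python
# Hedged sketch: translate IMU tilt into relative mouse movement with a dead
# zone to suppress small, unintended motions. Not the paper's implementation.
import time
from pynput.mouse import Controller

mouse = Controller()
GAIN = 4.0        # assumed pixels of cursor travel per degree of tilt
DEAD_ZONE = 2.0   # assumed degrees of tilt ignored, to filter tremor/jitter

def read_imu_tilt():
    """Hypothetical placeholder: return (pitch_deg, roll_deg) from the IMU."""
    return 0.0, 0.0

while True:
    pitch, roll = read_imu_tilt()
    dx = GAIN * roll if abs(roll) > DEAD_ZONE else 0.0
    dy = GAIN * pitch if abs(pitch) > DEAD_ZONE else 0.0
    mouse.move(int(dx), int(dy))              # relative cursor movement
    time.sleep(0.02)                          # ~50 Hz update loop
```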
Citations: 4