
Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility: Latest Publications

An automated AR-based annotation tool for indoor navigation for visually impaired people
Pei Du, N. Bulusu
Low vision people face many daily encumbrances. Traditional visual enhancements do not suffice to navigate indoor environments, or recognize objects efficiently. In this paper, we explore how Augmented Reality (AR) can be leveraged to design mobile applications to improve visual experience and unburden low vision persons. Specifically, we propose a novel automated AR-based annotation tool for detecting and labeling salient objects for assisted indoor navigation applications like NearbyExplorer. NearbyExplorer, which issues audio descriptions of nearby objects to the users, relies on a database populated by large teams of volunteers and map-a-thons to manually annotate salient objects in the environment like desks, chairs, low overhead ceilings. This has limited widespread and rapid deployment. Our tool builds on advances in automated object detection, AR labeling and accurate indoor positioning to provide an automated way to upload object labels and user position to a database, requiring just one volunteer. Moreover, it enables low vision people to detect and notice surrounding objects quickly using smartphones in various indoor environments.
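The pipeline the abstract describes pairs detected object labels with the volunteer's indoor position and uploads the result to a shared database. The sketch below illustrates that pairing-and-upload step; the record fields, confidence threshold, and the file-based stand-in for the database are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the annotation-upload step: names, fields, and the
# file-based "upload" are illustrative assumptions, not the authors' code.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Annotation:
    label: str          # e.g. "chair", from the object detector
    confidence: float   # detector confidence in [0, 1]
    x: float            # indoor position of the volunteer, metres
    y: float
    floor: int
    timestamp: float

def build_annotations(detections, position):
    """Pair each detected object with the volunteer's indoor position."""
    x, y, floor = position
    return [
        Annotation(label, conf, x, y, floor, time.time())
        for label, conf in detections
        if conf >= 0.5  # keep only confident detections
    ]

def upload(annotations, path="annotations.json"):
    """Stand-in for the database upload: append records as JSON lines."""
    with open(path, "a", encoding="utf-8") as f:
        for a in annotations:
            f.write(json.dumps(asdict(a)) + "\n")

if __name__ == "__main__":
    detections = [("desk", 0.91), ("chair", 0.87), ("low ceiling", 0.42)]
    upload(build_annotations(detections, position=(12.4, 3.7, 2)))
```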
Citations: 4
Sidewalk Gallery: An Interactive, Filterable Image Gallery of Over 500,000 Sidewalk Accessibility Problems
Michael Duan, Aroosh Kumar, Michael Saugstad, Aileen Zeng, Ilia Savin, Jon E. Froehlich
What do sidewalk accessibility problems look like? How might these problems differ across cities? In this poster paper, we introduce Sidewalk Gallery, an interactive, filterable gallery of over 500,000 crowdsourced sidewalk accessibility images across seven cities in two countries (US and Mexico). Gallery allows users to explore and interactively filter sidewalk images based on five primary accessibility problem types, 35 tag categories, and a 5-point severity scale. When browsing images, users can also provide feedback about data correctness. We envision Gallery as a tool for teaching in urban design and accessibility and as a visualization aid for disability advocacy.
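Gallery's core interaction is filtering a large set of labeled images by problem type, tag, and severity. Below is a minimal sketch of such a filter with a hypothetical record schema; the field names and example records are assumptions, not the project's actual data model.

```python
# Minimal sketch of the kind of filtering Gallery exposes; the fields and
# example records are assumptions, not the project's actual schema.
from dataclasses import dataclass, field

@dataclass
class SidewalkImage:
    city: str
    problem_type: str        # one of five primary problem types
    severity: int            # 1 (minor) .. 5 (severe)
    tags: set = field(default_factory=set)

def filter_images(images, problem_type=None, min_severity=1, required_tags=frozenset()):
    """Return images matching the selected type, severity, and tags."""
    return [
        img for img in images
        if (problem_type is None or img.problem_type == problem_type)
        and img.severity >= min_severity
        and required_tags <= img.tags
    ]

if __name__ == "__main__":
    images = [
        SidewalkImage("Seattle", "Surface Problem", 4, {"cracks", "uneven"}),
        SidewalkImage("Mexico City", "Obstacle", 3, {"pole"}),
    ]
    print(filter_images(images, problem_type="Surface Problem", min_severity=3))
```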
Citations: 2
Walker - An Autonomous, Interactive Walking Aid
Johannes Hackbarth, Caspar Jacob
In this paper, we describe ongoing work about a robotic walker-frame that was designed to aid patients in an orthopaedic rehabilitation clinic. The so-called Walker is able to autonomously drive to patients and then changes into a more traditional walking-frame, i.e. one that has to be pushed by the patient, but it can still help by giving navigation instructions. Walker was designed with a multi-modal user interface in such a way that it can also be used by visually, hearing or speaking impaired people.
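Walker's behaviour amounts to two operating modes: driving itself to the patient, then acting as a pushed frame that gives navigation instructions. The toy state sketch below captures that split; the mode names, thresholds, and prompts are illustrative assumptions only.

```python
# A toy state sketch of the two operating modes described above; states,
# transitions, and spoken prompts are illustrative assumptions.
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS_APPROACH = auto()   # drives itself to the patient
    MANUAL_GUIDED = auto()         # pushed by the patient, gives directions

class Walker:
    def __init__(self):
        self.mode = Mode.AUTONOMOUS_APPROACH

    def reached_patient(self):
        """Switch to the traditional, patient-pushed walking frame."""
        self.mode = Mode.MANUAL_GUIDED

    def instruction(self, heading_error_deg):
        """Guidance spoken or displayed while the patient pushes the frame."""
        if self.mode is not Mode.MANUAL_GUIDED:
            return "approaching patient"
        if abs(heading_error_deg) < 10:
            return "keep going straight"
        return "turn left" if heading_error_deg > 0 else "turn right"

if __name__ == "__main__":
    w = Walker()
    w.reached_patient()
    print(w.instruction(heading_error_deg=25))  # -> "turn left"
```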
Citations: 0
Adee: Bringing Accessibility Right Inside Design Tools
Samine Hadadi
According to a World Bank report, about 15 percent of the world’s population (roughly 1 billion people) experience some form of disability [3]. However, designers can easily forget to take account of disabilities such as colorblindness, as most designers are not colorblind and accessibility checks are not integrated into design tools. In this work, we introduce and evaluate Adee, an accessibility testing tool that has been integrated into the widely used design tools Adobe XD, Figma, and Sketch. Adee aims to make accessibility part of the design process and to create inclusive and ethical products.
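Contrast between text and background colours is one representative check an in-editor accessibility plugin can run on a design. The sketch below computes the standard WCAG 2.1 contrast ratio and the AA pass thresholds; it is a generic illustration, not Adee's actual code.

```python
# Generic WCAG 2.1 contrast check of the kind a design-tool plugin can run;
# this is an illustration, not Adee's implementation.
def _relative_luminance(rgb):
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, from 1:1 (identical) to 21:1 (black on white)."""
    l1, l2 = sorted((_relative_luminance(fg), _relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def passes_aa(fg, bg, large_text=False):
    """WCAG AA requires 4.5:1 for normal text, 3:1 for large text."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)

if __name__ == "__main__":
    print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))  # ~4.48, fails AA
    print(passes_aa((0, 0, 0), (255, 255, 255)))                        # True
```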
Citations: 6
Measuring Text Comprehension for People with Reading Difficulties Using a Mobile Application
Andreas Säuberli
Measuring text comprehension is crucial for evaluating the accessibility of texts in Easy Language. However, accurate and objective comprehension tests tend to be expensive, time-consuming and sometimes difficult to implement for target groups of Easy Language. In this paper, we propose using computer-based testing with touchscreen devices as a means to simplify and accelerate data collection using comprehension tests, and to facilitate experiments with less proficient readers. We demonstrate this by designing and implementing a mobile touchscreen application and validating its effectiveness in an experiment with people with intellectual disabilities. The results suggest that there is no difference in terms of task difficulty between measuring comprehension using the mobile application and a traditional paper-and-pencil test. Moreover, reading times appear to be faster in the application than on paper.
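Computer-based testing makes it straightforward to log per-item reading times alongside answers, which is what enables the reading-time comparison reported above. Below is a minimal logging sketch; the item identifiers and record fields are assumptions, not the study's instrument.

```python
# Hypothetical per-item logging for a touchscreen comprehension test; the
# fields and item names are assumptions for illustration.
import time

class ComprehensionLog:
    def __init__(self):
        self.records = []
        self._item = None
        self._shown_at = None

    def show_item(self, item_id):
        """Called when a text/question appears on screen."""
        self._item = item_id
        self._shown_at = time.monotonic()

    def answer(self, choice, correct):
        """Called when the participant taps an answer option."""
        self.records.append({
            "item": self._item,
            "reading_time_s": time.monotonic() - self._shown_at,
            "correct": choice == correct,
        })

if __name__ == "__main__":
    log = ComprehensionLog()
    log.show_item("text1_q1")
    time.sleep(0.1)                     # participant reads and decides
    log.answer(choice="B", correct="B")
    print(log.records)
```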
Citations: 1
Rehabilitation through Accessible Mobile Gaming and Wearable Sensors
D. Ahmetovic, Antonio Pugliese, S. Mascetti, Valentina Begnozzi, E. Boccalandro, R. Gualtierotti, F. Peyvandi
Play Access is an Android assistive technology that replaces touchscreen interaction with alternative interfaces, enabling people with upper extremity impairments to access mobile games, and providing alternative means of playing mobile games for all. We demonstrate the use of Play Access to support physical therapy for children with haemophilia, with the goal of preventing long-term mobility impairments. To achieve this, we modified Play Access to enable the use of body movements, recognized using wearable sensors, as an alternative interface for playing games. This way, Play Access makes it possible to use existing Android games as exergames, hence better targeting patients’ interest.
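The key idea is translating a recognized body movement from the wearable sensor into the input event a touch would normally produce. The sketch below shows a crude threshold-based version of that mapping; the threshold, event type, and target coordinates are assumptions, not the Play Access interface.

```python
# Crude, hypothetical mapping from a wearable-sensor reading to a synthesized
# game input; threshold and event fields are assumptions.
from dataclasses import dataclass

@dataclass
class TapEvent:
    x: float
    y: float

def movement_to_event(accel_magnitude_g, screen_target=(0.5, 0.5), threshold_g=1.8):
    """Emit a tap at the current game target when a deliberate arm raise is detected."""
    if accel_magnitude_g >= threshold_g:
        return TapEvent(*screen_target)
    return None

if __name__ == "__main__":
    samples = [1.0, 1.1, 2.3, 1.0]                       # one sample crosses the threshold
    events = [e for s in samples if (e := movement_to_event(s))]
    print(events)                                        # [TapEvent(x=0.5, y=0.5)]
```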
Citations: 2
Fostering collaboration with asymmetric roles in accessible programming environments for children with mixed-visual-abilities
Filipa Rocha, Guilherme Guimarães, David Gonçalves, A. Pires, L. Abreu, T. Guerreiro
Introduction of computational thinking training in early childhood potentiates cognitive development and better prepares children to live and prosper in a future heavily computational society. Programming environments are now widely adopted in classrooms to teach programming concepts. However, these tools are often reliant on visual interaction, making them inaccessible to children with visual impairments. Also, programming environments in general are usually designed to promote individual experiences, wasting the potential benefits of group collaborative activities. We propose the design of a programming environment that leverages asymmetric roles to foster collaborative computational thinking activities for children with visual impairments, in particular mixed-visual-ability classes. The multimodal system comprises the use of tangible blocks and auditory feedback, while children have to collaborate to program a robot. We conducted a remote online study, collecting valuable feedback on the limitations and opportunities for future work, aiming to potentiate education and social inclusion.
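At its core, the activity has the children assemble a sequence of tangible command blocks that is then executed by a robot. Below is a toy interpreter for such a block sequence on a grid; the block names and grid model are illustrative assumptions, not the authors' system.

```python
# Toy interpreter for a tangible block program driving a robot on a grid;
# block vocabulary and grid model are illustrative assumptions.
def run_program(blocks, start=(0, 0), heading=(0, 1)):
    """Execute blocks such as 'forward', 'left', 'right' and return the path."""
    x, y = start
    dx, dy = heading
    path = [(x, y)]
    for block in blocks:
        if block == "forward":
            x, y = x + dx, y + dy
            path.append((x, y))
        elif block == "left":
            dx, dy = -dy, dx
        elif block == "right":
            dx, dy = dy, -dx
        else:
            raise ValueError(f"unknown block: {block}")
    return path

if __name__ == "__main__":
    # Two children each contribute part of the program (asymmetric roles).
    print(run_program(["forward", "right", "forward", "forward"]))
    # -> [(0, 0), (0, 1), (1, 1), (2, 1)]
```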
Citations: 5
Colorable Band: A Wearable Device to Encourage Daily Decision Making Based on Behavior of Users with Color Vision Deficiency
A. Uehara
People with color vision deficiency (CVD) face several difficulties in performing daily tasks because their perception often falls outside the culturally, linguistically, and educationally shaped majority view. This study aims to develop a device that can seamlessly input and output information based on the user's handling actions, and to verify how well it supports the daily decision-making of people with CVD. The use case is selecting clothes in a shop: online behavior observation is conducted to design an assistive method, and a watch-type device is developed that shows useful information, such as adjusted color and/or text for people with CVD, on a display at the wrist. An online user interview with three CVD participants, using first-person and bird's-eye-view video, verifies the validity of the developed device for daily support, and the accuracy and effectiveness of the watch-type device are assessed. Given the coronavirus pandemic, the study presents a prototyped proof-of-concept device evaluated in a remote setting and discusses daily support for people with CVD.
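One piece of the wrist display described above is turning a sampled garment colour into information a person with CVD can act on, for example its nearest colour name. The sketch below shows that step with a small hypothetical palette and a simple distance metric; it is not the paper's method.

```python
# Hypothetical nearest-colour-name lookup for a sampled garment pixel; the
# palette and distance metric are assumptions, not the paper's method.
PALETTE = {
    "red": (220, 40, 40),
    "green": (40, 160, 70),
    "blue": (50, 80, 200),
    "brown": (120, 80, 50),
    "grey": (128, 128, 128),
    "white": (245, 245, 245),
    "black": (20, 20, 20),
}

def name_colour(rgb):
    """Return the palette name closest to the sampled RGB value."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PALETTE, key=lambda name: dist2(PALETTE[name], rgb))

if __name__ == "__main__":
    print(name_colour((200, 60, 55)))   # -> "red"
    print(name_colour((60, 150, 80)))   # -> "green"
```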
Citations: 0
Meeting Participants with Intellectual Disabilities during COVID-19 Pandemic: Challenges and Improvisation
L. Guedes, M. Landoni
With the COVID-19 pandemic, we all suffered from several restrictions and measures regulating interaction with one another. We had to wear masks, use hand sanitizer, have open-air meetings, feel a combination of excitement and frustration, and eventually depend on online video calls. The combinations of these additional requirements and limitations, while necessary, affected how we could involve users in the different stages of design. It has profoundly hindered our chances of meeting in person with people with temporary or permanent disabilities. In our project, involving people with intellectual disabilities in the museum context, we also had to deal with museums being closed and physical exhibitions being canceled. At the same time, guardians and caregivers often turned to a stricter interpretation of anti-COVID measures to protect people with intellectual disabilities. This paper aims to discuss these challenges and share our lessons about coping with challenging and unpredictable situations by using improvisation.
Citations: 2
Increasing Access to Trainer-led Aerobic Exercise for People with Visual Impairments through a Sensor Mat System
Jeehan Malik, Mitchell Majure, Hana Gabrielle Rubio Bidon, Regan Lamoureux, Kyle Rector
People with visual impairments (PVIs) are less likely to participate in physical activity than their sighted peers. One barrier is the lack of accessible group-based aerobic exercise classes, often due to instructors not giving accessible verbal instructions. While there is research in exercise tracking, these tools often require vision or familiarity with the exercise. There are accessible solutions that give personalized verbal feedback in slower-paced exercises, not generalizing to aerobics. In response, we have developed an algorithm that detects shoeprints on a sensor mat using computer vision and a CNN. We can infer whether a person is following along with a step aerobics workout and are designing reactive verbal feedback to guide the person to rejoin the class. Future work will include finishing development and conducting a user study to assess the effectiveness of the reactive verbal feedback.
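The detection step classifies low-resolution pressure frames from the mat with a small CNN. The sketch below is a minimal PyTorch model of that shape; the architecture, input resolution (1x16x16), and class labels are assumptions, not the authors' trained network.

```python
# Minimal, hypothetical CNN over mat pressure frames ("on step" vs "off step");
# architecture and input size are assumptions, not the authors' model.
import torch
import torch.nn as nn

class StepClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 16x16 -> 8x8
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 8x8 -> 4x4
        )
        self.head = nn.Linear(16 * 4 * 4, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(start_dim=1))

if __name__ == "__main__":
    frames = torch.rand(4, 1, 16, 16)        # a batch of pressure frames
    logits = StepClassifier()(frames)
    print(logits.shape)                       # torch.Size([4, 2])
```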
Citations: 1